Feb 23 08:47:47 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 23 08:47:47 crc restorecon[4715]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 08:47:47 crc restorecon[4715]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc 
restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc 
restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 
08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 
crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 
08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 08:47:47 crc restorecon[4715]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc 
restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:47 crc restorecon[4715]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc 
restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc 
restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 08:47:48 crc restorecon[4715]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 
crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc 
restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 08:47:48 crc restorecon[4715]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc 
restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 08:47:48 crc restorecon[4715]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 23 08:47:49 crc kubenswrapper[4940]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 08:47:49 crc kubenswrapper[4940]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 08:47:49 crc kubenswrapper[4940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 08:47:49 crc kubenswrapper[4940]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 23 08:47:49 crc kubenswrapper[4940]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 23 08:47:49 crc kubenswrapper[4940]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.077077 4940 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084228 4940 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084279 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084292 4940 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084320 4940 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084332 4940 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084344 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084355 4940 feature_gate.go:330] unrecognized feature gate: Example Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084365 4940 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084374 4940 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 
08:47:49.084385 4940 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084395 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084408 4940 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084418 4940 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084428 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084440 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084451 4940 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084461 4940 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084472 4940 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084482 4940 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084493 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084503 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084514 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084524 4940 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084535 4940 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084545 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084555 4940 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084571 4940 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084583 4940 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084595 4940 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084606 4940 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084651 4940 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084668 4940 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084682 4940 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084694 4940 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084708 4940 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084722 4940 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084733 4940 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084745 4940 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084757 4940 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084770 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084781 4940 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084791 4940 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084802 4940 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084813 4940 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084825 4940 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084840 4940 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084854 4940 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084867 4940 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084878 4940 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084889 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084899 4940 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084909 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084920 4940 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084931 4940 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084943 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084959 4940 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084969 4940 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084980 4940 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.084990 4940 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085001 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085011 4940 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085021 4940 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085031 4940 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085041 4940 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085052 4940 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085063 4940 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085074 4940 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085084 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085095 4940 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085105 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.085116 4940 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087160 4940 flags.go:64] FLAG: --address="0.0.0.0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087196 4940 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087222 4940 flags.go:64] FLAG: --anonymous-auth="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087239 4940 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087257 4940 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087270 4940 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087287 4940 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087301 4940 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087315 4940 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087328 4940 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087342 4940 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087355 4940 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087367 4940 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087379 4940 flags.go:64] FLAG: --cgroup-root=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087391 4940 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087404 4940 flags.go:64] FLAG: --client-ca-file=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087417 4940 flags.go:64] FLAG: --cloud-config=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087430 4940 flags.go:64] FLAG: --cloud-provider=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087442 4940 flags.go:64] FLAG: --cluster-dns="[]"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087457 4940 flags.go:64] FLAG: --cluster-domain=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087469 4940 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087481 4940 flags.go:64] FLAG: --config-dir=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087493 4940 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087506 4940 flags.go:64] FLAG: --container-log-max-files="5"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087521 4940 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087534 4940 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087547 4940 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087560 4940 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087572 4940 flags.go:64] FLAG: --contention-profiling="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087585 4940 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087597 4940 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087641 4940 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087656 4940 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087673 4940 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087686 4940 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087698 4940 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087710 4940 flags.go:64] FLAG: --enable-load-reader="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087723 4940 flags.go:64] FLAG: --enable-server="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087735 4940 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087755 4940 flags.go:64] FLAG: --event-burst="100"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087769 4940 flags.go:64] FLAG: --event-qps="50"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087782 4940 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087794 4940 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087807 4940 flags.go:64] FLAG: --eviction-hard=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087822 4940 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087834 4940 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087846 4940 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087858 4940 flags.go:64] FLAG: --eviction-soft=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087871 4940 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087883 4940 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087896 4940 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087908 4940 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087920 4940 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087932 4940 flags.go:64] FLAG: --fail-swap-on="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087945 4940 flags.go:64] FLAG: --feature-gates=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087961 4940 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087975 4940 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.087988 4940 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088000 4940 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088013 4940 flags.go:64] FLAG: --healthz-port="10248"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088026 4940 flags.go:64] FLAG: --help="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088039 4940 flags.go:64] FLAG: --hostname-override=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088051 4940 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088063 4940 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088075 4940 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088086 4940 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088099 4940 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088111 4940 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088124 4940 flags.go:64] FLAG: --image-service-endpoint=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088135 4940 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088147 4940 flags.go:64] FLAG: --kube-api-burst="100"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088160 4940 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088174 4940 flags.go:64] FLAG: --kube-api-qps="50"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088186 4940 flags.go:64] FLAG: --kube-reserved=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088198 4940 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088208 4940 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088222 4940 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088234 4940 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088245 4940 flags.go:64] FLAG: --lock-file=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088255 4940 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088267 4940 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088278 4940 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088297 4940 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088308 4940 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088319 4940 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088330 4940 flags.go:64] FLAG: --logging-format="text"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088341 4940 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088354 4940 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088365 4940 flags.go:64] FLAG: --manifest-url=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088378 4940 flags.go:64] FLAG: --manifest-url-header=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088395 4940 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088407 4940 flags.go:64] FLAG: --max-open-files="1000000"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088423 4940 flags.go:64] FLAG: --max-pods="110"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088435 4940 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088446 4940 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088457 4940 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088467 4940 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088480 4940 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088492 4940 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088504 4940 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088530 4940 flags.go:64] FLAG: --node-status-max-images="50"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088541 4940 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088553 4940 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088564 4940 flags.go:64] FLAG: --pod-cidr=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088575 4940 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088594 4940 flags.go:64] FLAG: --pod-manifest-path=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088604 4940 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088644 4940 flags.go:64] FLAG: --pods-per-core="0"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088655 4940 flags.go:64] FLAG: --port="10250"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088666 4940 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088677 4940 flags.go:64] FLAG: --provider-id=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088687 4940 flags.go:64] FLAG: --qos-reserved=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088700 4940 flags.go:64] FLAG: --read-only-port="10255"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088713 4940 flags.go:64] FLAG: --register-node="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088724 4940 flags.go:64] FLAG: --register-schedulable="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088736 4940 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088756 4940 flags.go:64] FLAG: --registry-burst="10"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088767 4940 flags.go:64] FLAG: --registry-qps="5"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088778 4940 flags.go:64] FLAG: --reserved-cpus=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088789 4940 flags.go:64] FLAG: --reserved-memory=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088803 4940 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088815 4940 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088827 4940 flags.go:64] FLAG: --rotate-certificates="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088838 4940 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088848 4940 flags.go:64] FLAG: --runonce="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088859 4940 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088871 4940 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088883 4940 flags.go:64] FLAG: --seccomp-default="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088894 4940 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088905 4940 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088917 4940 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088928 4940 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088939 4940 flags.go:64] FLAG: --storage-driver-password="root"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088951 4940 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088962 4940 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088972 4940 flags.go:64] FLAG: --storage-driver-user="root"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088983 4940 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.088995 4940 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089007 4940 flags.go:64] FLAG: --system-cgroups=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089018 4940 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089036 4940 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089047 4940 flags.go:64] FLAG: --tls-cert-file=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089057 4940 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089071 4940 flags.go:64] FLAG: --tls-min-version=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089082 4940 flags.go:64] FLAG: --tls-private-key-file=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089093 4940 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089104 4940 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089116 4940 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089129 4940 flags.go:64] FLAG: --v="2"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089144 4940 flags.go:64] FLAG: --version="false"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089158 4940 flags.go:64] FLAG: --vmodule=""
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089170 4940 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.089182 4940 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093549 4940 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093634 4940 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093641 4940 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093693 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093699 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093709 4940 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093717 4940 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093722 4940 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.093728 4940 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094159 4940 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094197 4940 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094211 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094222 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094233 4940 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094247 4940 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094260 4940 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094272 4940 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094283 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094293 4940 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094303 4940 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094313 4940 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094323 4940 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094333 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094344 4940 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094354 4940 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094364 4940 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094374 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094384 4940 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094393 4940 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094402 4940 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094413 4940 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094424 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094433 4940 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094443 4940 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094453 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094462 4940 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094473 4940 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094483 4940 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094492 4940 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094502 4940 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094511 4940 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094520 4940 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094530 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094539 4940 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094551 4940 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094561 4940 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094574 4940 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094588 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094599 4940 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094656 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094668 4940 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094678 4940 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094688 4940 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094697 4940 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094707 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094717 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094728 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094738 4940 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094748 4940 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094758 4940 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094767 4940 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094777 4940 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094790 4940 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094804 4940 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094814 4940 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094823 4940 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094832 4940 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094842 4940 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094852 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094861 4940 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.094871 4940 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.094887 4940 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.103407 4940 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.103458 4940 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103536 4940 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103545 4940 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103550 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103556 4940 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103560 4940 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103563 4940 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103568 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103573 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103578 4940 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103583 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103588 4940 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103592 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103597 4940 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103601 4940 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103606 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103625 4940 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103630 4940 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103634 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103638 4940 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103643 4940 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103647 4940 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103652 4940 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103657 4940 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103661 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 
08:47:49.103665 4940 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103672 4940 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103676 4940 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103681 4940 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103685 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103689 4940 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103692 4940 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103696 4940 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103700 4940 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103704 4940 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103708 4940 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103712 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103716 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103719 4940 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103723 4940 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 
08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103727 4940 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103731 4940 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103735 4940 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103741 4940 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103752 4940 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103758 4940 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103763 4940 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103768 4940 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103772 4940 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103776 4940 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103780 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103784 4940 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103787 4940 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103791 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 08:47:49 crc 
kubenswrapper[4940]: W0223 08:47:49.103795 4940 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103799 4940 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103802 4940 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103807 4940 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103812 4940 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103816 4940 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103820 4940 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103824 4940 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103828 4940 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103833 4940 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103838 4940 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103843 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103847 4940 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103852 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103856 4940 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103860 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103864 4940 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.103868 4940 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.103876 4940 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104022 4940 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104032 4940 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104037 4940 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104042 4940 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104046 4940 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104050 4940 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104054 4940 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104060 4940 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104064 4940 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104068 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104073 4940 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104077 4940 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104082 4940 feature_gate.go:330] unrecognized feature gate: Example
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104087 4940 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104090 4940 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104094 4940 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104098 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104103 4940 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104109 4940 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104114 4940 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104119 4940 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104123 4940 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104128 4940 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104132 4940 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104136 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104140 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104144 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104148 4940 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104151 4940 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104155 4940 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104159 4940 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104163 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104168 4940 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104173 4940 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104178 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104183 4940 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104187 4940 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104191 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104196 4940 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104201 4940 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104205 4940 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104210 4940 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104213 4940 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104226 4940 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104230 4940 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104234 4940 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104239 4940 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104243 4940 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104247 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104252 4940 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104258 4940 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104262 4940 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104266 4940 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104270 4940 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104274 4940 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104279 4940 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104283 4940 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104287 4940 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104292 4940 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104296 4940 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104300 4940 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104305 4940 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104309 4940 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104314 4940 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104318 4940 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104323 4940 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104327 4940 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104332 4940 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104336 4940 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104340 4940 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.104344 4940 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.104351 4940 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.104585 4940 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.109856 4940 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.109958 4940 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.111213 4940 server.go:997] "Starting client certificate rotation"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.111244 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.111494 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-06 20:41:50.381718188 +0000 UTC
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.111650 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.138409 4940 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.140486 4940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.140924 4940 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.161681 4940 log.go:25] "Validated CRI v1 runtime API"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.206492 4940 log.go:25] "Validated CRI v1 image API"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.210846 4940 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.218274 4940 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-23-08-43-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.218353 4940 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.248268 4940 manager.go:217] Machine: {Timestamp:2026-02-23 08:47:49.244903529 +0000 UTC m=+0.628109746 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3c406e8c-0d77-4ead-8ee9-37cf28c01cc1 BootID:0b40f9a7-6d5b-496d-bcec-88183c6aba29 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f6:f0:d7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f6:f0:d7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:43:b3:2d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:46:ae:b5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1e:8b:17 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8f:6d:44 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3a:8e:68:34:0f:4b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:c6:3d:ce:c4:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.248568 4940 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.248762 4940 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.249097 4940 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.249304 4940 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.249350 4940 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.249658 4940 topology_manager.go:138] "Creating topology manager with none policy"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.249671 4940 container_manager_linux.go:303] "Creating device plugin manager"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.250101 4940 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.250148 4940 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.251800 4940 state_mem.go:36] "Initialized new in-memory state store"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.251942 4940 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.257838 4940 kubelet.go:418] "Attempting to sync node with API server"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.257868 4940 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.257918 4940 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.257933 4940 kubelet.go:324] "Adding apiserver pod source"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.257944 4940 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.262089 4940 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.262648 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.262747 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.262786 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.262900 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.262952 4940 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.265394 4940 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268729 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268757 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268766 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268774 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268787 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268795 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268805 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268818 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268828 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268836 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268850 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.268858 4940 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.270728 4940 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.271301 4940 server.go:1280] "Started kubelet" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.272355 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.272483 4940 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.272518 4940 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 08:47:49 crc systemd[1]: Started Kubernetes Kubelet. Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.273824 4940 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.276874 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.276957 4940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.278082 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:00:19.675865207 +0000 UTC Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.278373 4940 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.278393 4940 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.281420 4940 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.281740 4940 
server.go:460] "Adding debug handlers to kubelet server" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.282975 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.287757 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.287761 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.291944 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.292042 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.292910 4940 factory.go:153] Registering CRI-O factory Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294075 4940 factory.go:221] Registration of the crio container factory successfully Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294159 4940 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294171 4940 factory.go:55] Registering systemd factory Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294179 4940 factory.go:221] Registration of the systemd container factory successfully Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294202 4940 factory.go:103] Registering Raw factory Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294217 4940 manager.go:1196] Started watching for new ooms in manager Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.294969 4940 manager.go:319] Starting recovery of all containers Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307364 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307465 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 
23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307486 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307505 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307523 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307539 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307555 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307571 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307589 4940 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307607 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307647 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307663 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307680 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307701 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307741 4940 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307759 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307780 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307799 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307828 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307846 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307863 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307881 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307898 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307917 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307936 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.307953 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308007 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308029 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308046 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308063 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308081 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308097 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308141 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308160 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308177 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308194 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308212 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308231 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308256 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308274 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308290 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308306 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308321 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308336 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308355 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" 
seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308371 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308387 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308402 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308420 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308436 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308453 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 
08:47:49.308471 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308512 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308531 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308548 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308565 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308582 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308599 4940 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308633 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308651 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308666 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308681 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308695 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308735 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308754 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308769 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308785 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308802 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308817 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308834 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308850 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308865 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308881 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308897 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308912 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.308927 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310192 4940 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310221 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310235 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310248 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310293 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310306 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310318 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310330 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310342 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310358 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310375 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310393 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310416 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310430 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310445 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310460 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310474 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310488 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310504 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310520 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310537 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310568 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310630 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310651 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310667 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310684 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310698 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310713 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310727 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310747 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310763 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310779 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310792 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310806 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310818 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310838 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 
23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.310863 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.312826 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.312926 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.312947 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.312972 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313036 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313058 4940 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313091 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313107 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313135 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313153 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313170 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313194 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313227 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313255 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313268 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313288 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313349 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313364 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313429 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313444 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313484 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313509 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313524 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313584 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313638 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313650 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313714 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313726 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313817 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313831 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313843 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313861 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313873 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313885 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313902 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313913 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313947 4940 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313961 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313975 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.313993 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314007 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314021 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314033 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314046 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314087 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314100 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314115 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314128 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314140 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314155 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314167 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314184 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314221 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314233 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314268 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314280 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314381 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314398 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314409 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314427 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314499 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314510 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314559 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314572 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314586 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314603 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314627 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314642 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314687 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314702 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314734 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314745 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314809 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314830 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314845 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314861 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314909 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314927 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314945 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" 
seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.314962 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315001 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315013 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315031 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315044 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315173 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 
08:47:49.315194 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315291 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315308 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315340 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315352 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315366 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315377 4940 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315441 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315459 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315515 4940 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315528 4940 reconstruct.go:97] "Volume reconstruction finished" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.315544 4940 reconciler.go:26] "Reconciler: start to sync state" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.319598 4940 manager.go:324] Recovery completed Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.330511 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.335702 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.335786 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.335802 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.337989 4940 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.338018 4940 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.338042 4940 state_mem.go:36] "Initialized new in-memory state store" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.342134 4940 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.344339 4940 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.344379 4940 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.344404 4940 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.344530 4940 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.345126 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.345167 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.354353 4940 policy_none.go:49] "None policy: Start" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.355195 4940 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.355224 4940 state_mem.go:35] "Initializing new in-memory state store" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.383107 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.405910 4940 manager.go:334] "Starting Device Plugin manager" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406021 4940 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406033 4940 server.go:79] "Starting device plugin registration server" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406360 4940 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406375 4940 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406583 4940 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406659 4940 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.406666 4940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.417876 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 
08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.445168 4940 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.445302 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.446505 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.446541 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.446552 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.446705 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.447247 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.447327 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.448119 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.448164 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.448176 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.448404 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.448744 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.448836 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.449228 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.449264 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.449282 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450103 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450150 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450169 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450180 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450155 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450273 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450675 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450839 
4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.450888 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.451919 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.451940 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.451953 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452069 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452107 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452119 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452128 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452366 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452433 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452829 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452864 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.452875 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453060 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453091 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453282 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453310 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453321 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453770 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453797 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.453808 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.488597 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.507284 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.508787 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.508818 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.508828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.508850 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.509279 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518159 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 
08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518209 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518238 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518260 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518283 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518309 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518330 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518350 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518370 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518411 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518450 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518554 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518692 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.518719 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.519729 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621362 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621428 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621453 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621477 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621502 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621523 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621545 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 
08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621567 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621592 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621632 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621655 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621678 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621679 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621760 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621771 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621782 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621764 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621825 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 
08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621896 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621938 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621962 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621733 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621741 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621700 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.621744 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.622037 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.622119 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.622164 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.622228 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 
08:47:49.622280 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.709458 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.710770 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.710814 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.710825 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.710849 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.711357 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.775475 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.792710 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.801325 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.820462 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.826782 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2f53cd6f22a9b09b0779988d50373af2c1c49e6a76c3b81deafebfbea1309d9e WatchSource:0}: Error finding container 2f53cd6f22a9b09b0779988d50373af2c1c49e6a76c3b81deafebfbea1309d9e: Status 404 returned error can't find the container with id 2f53cd6f22a9b09b0779988d50373af2c1c49e6a76c3b81deafebfbea1309d9e Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.828755 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f6ed90af04498d9674a47366a6785dbe3993e6f9a6de0add65bc821afc925e0f WatchSource:0}: Error finding container f6ed90af04498d9674a47366a6785dbe3993e6f9a6de0add65bc821afc925e0f: Status 404 returned error can't find the container with id f6ed90af04498d9674a47366a6785dbe3993e6f9a6de0add65bc821afc925e0f Feb 23 08:47:49 crc kubenswrapper[4940]: I0223 08:47:49.829982 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.834091 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-610329c67adc381b17a45c5ad17875c752127fddf3406966468f2b1ad1def669 WatchSource:0}: Error finding container 610329c67adc381b17a45c5ad17875c752127fddf3406966468f2b1ad1def669: Status 404 returned error can't find the container with id 610329c67adc381b17a45c5ad17875c752127fddf3406966468f2b1ad1def669 Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.840273 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-92830c43593c00cb1c3773ee69a22fd4d3399ad289efc86cd6ab5bd7695c55b1 WatchSource:0}: Error finding container 92830c43593c00cb1c3773ee69a22fd4d3399ad289efc86cd6ab5bd7695c55b1: Status 404 returned error can't find the container with id 92830c43593c00cb1c3773ee69a22fd4d3399ad289efc86cd6ab5bd7695c55b1 Feb 23 08:47:49 crc kubenswrapper[4940]: W0223 08:47:49.855439 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-acd578297c859d49da120157b7a19d59226211a01983427022d5afa82d8bcdda WatchSource:0}: Error finding container acd578297c859d49da120157b7a19d59226211a01983427022d5afa82d8bcdda: Status 404 returned error can't find the container with id acd578297c859d49da120157b7a19d59226211a01983427022d5afa82d8bcdda Feb 23 08:47:49 crc kubenswrapper[4940]: E0223 08:47:49.890404 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection 
refused" interval="800ms" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.111490 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.112999 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.113032 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.113045 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.113069 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.113546 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 23 08:47:50 crc kubenswrapper[4940]: W0223 08:47:50.237792 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.237871 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:50 crc kubenswrapper[4940]: W0223 08:47:50.271073 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.271168 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.273919 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.279099 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:01:45.747022921 +0000 UTC Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.353394 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"610329c67adc381b17a45c5ad17875c752127fddf3406966468f2b1ad1def669"} Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.354803 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f6ed90af04498d9674a47366a6785dbe3993e6f9a6de0add65bc821afc925e0f"} Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.359638 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2f53cd6f22a9b09b0779988d50373af2c1c49e6a76c3b81deafebfbea1309d9e"} Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.361840 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"acd578297c859d49da120157b7a19d59226211a01983427022d5afa82d8bcdda"} Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.362815 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"92830c43593c00cb1c3773ee69a22fd4d3399ad289efc86cd6ab5bd7695c55b1"} Feb 23 08:47:50 crc kubenswrapper[4940]: W0223 08:47:50.498440 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.498519 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:50 crc kubenswrapper[4940]: W0223 08:47:50.573356 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.573442 4940 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.692522 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.914514 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.915602 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.915682 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.915699 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:50 crc kubenswrapper[4940]: I0223 08:47:50.915734 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:47:50 crc kubenswrapper[4940]: E0223 08:47:50.916306 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.263140 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 08:47:51 crc kubenswrapper[4940]: E0223 08:47:51.264399 4940 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.274327 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.279483 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:52:53.30410788 +0000 UTC Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.366584 4940 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f" exitCode=0 Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.366702 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.366822 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.368379 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.368413 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:51 crc 
kubenswrapper[4940]: I0223 08:47:51.368423 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.369590 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.369638 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9cd5739e2a71a55c1bf38b8f783c024e47076dcc74b4524b08a2b60ed3d7fd5"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.369654 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.371763 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258" exitCode=0 Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.371843 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.372010 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.373279 4940 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.373326 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.373338 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.373774 4940 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5" exitCode=0 Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.373884 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.373944 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375218 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375276 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375319 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375342 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375838 4940 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64" exitCode=0 Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375885 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64"} Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.375981 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.376260 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.376293 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.376302 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.377898 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.377956 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:51 crc kubenswrapper[4940]: I0223 08:47:51.377975 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.147651 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:47:52 crc kubenswrapper[4940]: W0223 08:47:52.185736 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.185950 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.273995 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.280272 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 17:42:29.671634133 +0000 UTC Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.293783 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.387547 4940 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc" exitCode=0 Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.387648 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc"} Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.388943 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.390895 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.390938 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.390951 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.393301 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202"} Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.395795 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe"} Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.399192 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f"} Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.399258 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.400246 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.400285 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.400295 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.417209 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5"} Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.517346 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.518717 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.518778 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 
08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.518793 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:52 crc kubenswrapper[4940]: I0223 08:47:52.518859 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.519579 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 23 08:47:52 crc kubenswrapper[4940]: W0223 08:47:52.706162 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.706242 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:52 crc kubenswrapper[4940]: W0223 08:47:52.710319 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.710408 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" 
logger="UnhandledError" Feb 23 08:47:52 crc kubenswrapper[4940]: W0223 08:47:52.835184 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:52 crc kubenswrapper[4940]: E0223 08:47:52.835313 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.274619 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.280647 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:13:02.072423836 +0000 UTC Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.423863 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.423915 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 
08:47:53.423957 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.424941 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.424985 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.424997 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.427217 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"320ce295e7959b674abee36c26a76f4a999afd57b59d9ddedca5a91f15dbf89e"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.427253 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.427270 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.427284 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.427334 4940 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.428342 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.428382 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.428393 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.429797 4940 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf" exitCode=0 Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.429876 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.429890 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf"} Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.429911 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.430018 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.430580 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.430642 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:53 crc 
kubenswrapper[4940]: I0223 08:47:53.430655 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.431531 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.431570 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.431589 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.431542 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.431681 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.431693 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.753591 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:47:53 crc kubenswrapper[4940]: I0223 08:47:53.959775 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.220698 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.281591 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:14:01.789254434 +0000 UTC Feb 23 
08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437633 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437621 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a"} Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437650 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437799 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4"} Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437860 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3"} Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437880 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7"} Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.437711 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.438958 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.438998 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.439008 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.439288 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.439319 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.439328 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.440075 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.440114 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:54 crc kubenswrapper[4940]: I0223 08:47:54.440123 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.282040 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:47:40.589028063 +0000 UTC Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.445905 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.446433 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.446418 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474"} Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.446585 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.446949 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.447003 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.447015 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.447503 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.447545 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.447557 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.448172 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.448258 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.448305 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.650426 4940 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.720143 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.721502 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.721594 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.721654 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.721702 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.966035 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.966205 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.967812 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.967863 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:55 crc kubenswrapper[4940]: I0223 08:47:55.967877 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:56 crc kubenswrapper[4940]: I0223 08:47:56.283143 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:26:34.924990466 
+0000 UTC Feb 23 08:47:56 crc kubenswrapper[4940]: I0223 08:47:56.290791 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 23 08:47:56 crc kubenswrapper[4940]: I0223 08:47:56.448862 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:56 crc kubenswrapper[4940]: I0223 08:47:56.450176 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:56 crc kubenswrapper[4940]: I0223 08:47:56.450223 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:56 crc kubenswrapper[4940]: I0223 08:47:56.450238 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.283494 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:09:05.147816672 +0000 UTC Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.452153 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.453105 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.453147 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.453158 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.567578 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 
08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.567759 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.569023 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.569081 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.569098 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.737683 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.738011 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.739478 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.739509 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:57 crc kubenswrapper[4940]: I0223 08:47:57.739520 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:58 crc kubenswrapper[4940]: I0223 08:47:58.284027 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:54:12.857431928 +0000 UTC Feb 23 08:47:58 crc kubenswrapper[4940]: I0223 08:47:58.499169 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 23 08:47:58 crc kubenswrapper[4940]: 
I0223 08:47:58.499468 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:47:58 crc kubenswrapper[4940]: I0223 08:47:58.501053 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:47:58 crc kubenswrapper[4940]: I0223 08:47:58.501128 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:47:58 crc kubenswrapper[4940]: I0223 08:47:58.501144 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:47:59 crc kubenswrapper[4940]: I0223 08:47:59.284457 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:53:10.282182488 +0000 UTC Feb 23 08:47:59 crc kubenswrapper[4940]: E0223 08:47:59.418227 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.285194 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:27:52.767488962 +0000 UTC Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.483057 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.483364 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.485311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.485368 4940 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.485380 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.488071 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.567546 4940 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 08:48:00 crc kubenswrapper[4940]: I0223 08:48:00.567677 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 08:48:01 crc kubenswrapper[4940]: I0223 08:48:01.285399 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:17:50.054657674 +0000 UTC Feb 23 08:48:01 crc kubenswrapper[4940]: I0223 08:48:01.464226 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:01 crc kubenswrapper[4940]: I0223 08:48:01.465139 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:01 crc kubenswrapper[4940]: I0223 08:48:01.465208 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 
08:48:01 crc kubenswrapper[4940]: I0223 08:48:01.465228 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:01 crc kubenswrapper[4940]: I0223 08:48:01.468838 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:48:02 crc kubenswrapper[4940]: I0223 08:48:02.119722 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:48:02 crc kubenswrapper[4940]: I0223 08:48:02.286236 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:09:51.79363686 +0000 UTC Feb 23 08:48:02 crc kubenswrapper[4940]: I0223 08:48:02.467668 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:02 crc kubenswrapper[4940]: I0223 08:48:02.468681 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:02 crc kubenswrapper[4940]: I0223 08:48:02.468744 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:02 crc kubenswrapper[4940]: I0223 08:48:02.468756 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:03 crc kubenswrapper[4940]: I0223 08:48:03.286421 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:20:39.091839348 +0000 UTC Feb 23 08:48:03 crc kubenswrapper[4940]: I0223 08:48:03.470587 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:03 crc kubenswrapper[4940]: I0223 08:48:03.471893 4940 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:03 crc kubenswrapper[4940]: I0223 08:48:03.471949 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:03 crc kubenswrapper[4940]: I0223 08:48:03.471961 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.142834 4940 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.143603 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 08:48:04 crc kubenswrapper[4940]: W0223 08:48:04.148825 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.148895 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:04 crc kubenswrapper[4940]: W0223 08:48:04.150477 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.150515 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.152675 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z Feb 23 08:48:04 crc kubenswrapper[4940]: W0223 08:48:04.154069 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.154119 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.158409 4940 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.158460 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.160169 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.160584 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:48:04 crc kubenswrapper[4940]: W0223 08:48:04.165121 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z Feb 23 08:48:04 crc kubenswrapper[4940]: E0223 08:48:04.165221 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.170111 4940 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" 
cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.170189 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.222239 4940 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.222335 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.277260 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:04Z is after 2026-02-23T05:33:13Z Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.287585 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:02:04.033066961 +0000 UTC Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.475962 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.478248 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="320ce295e7959b674abee36c26a76f4a999afd57b59d9ddedca5a91f15dbf89e" exitCode=255 Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.478301 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"320ce295e7959b674abee36c26a76f4a999afd57b59d9ddedca5a91f15dbf89e"} Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.478546 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.479552 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.479589 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.479602 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:04 crc kubenswrapper[4940]: I0223 08:48:04.480259 4940 scope.go:117] "RemoveContainer" containerID="320ce295e7959b674abee36c26a76f4a999afd57b59d9ddedca5a91f15dbf89e" Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.277477 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:05Z is after 2026-02-23T05:33:13Z Feb 23 08:48:05 crc kubenswrapper[4940]: 
I0223 08:48:05.288060 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:33:23.73835598 +0000 UTC Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.482655 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.484499 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe"} Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.484666 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.485470 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.485558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:05 crc kubenswrapper[4940]: I0223 08:48:05.485572 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.277131 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:06Z is after 2026-02-23T05:33:13Z Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.288558 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:54:18.013210089 +0000 UTC Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.489237 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.489878 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.491451 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe" exitCode=255 Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.491493 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe"} Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.491550 4940 scope.go:117] "RemoveContainer" containerID="320ce295e7959b674abee36c26a76f4a999afd57b59d9ddedca5a91f15dbf89e" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.491736 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.492418 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.492440 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.492448 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 08:48:06 crc kubenswrapper[4940]: I0223 08:48:06.493026 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe" Feb 23 08:48:06 crc kubenswrapper[4940]: E0223 08:48:06.493223 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.276240 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:07Z is after 2026-02-23T05:33:13Z Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.289446 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:31:50.021994217 +0000 UTC Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.494321 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.743268 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.743408 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.744382 4940 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.744427 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.744439 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.745149 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe" Feb 23 08:48:07 crc kubenswrapper[4940]: E0223 08:48:07.745362 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:48:07 crc kubenswrapper[4940]: I0223 08:48:07.747524 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.277341 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:08Z is after 2026-02-23T05:33:13Z Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.289928 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:18:07.41469448 +0000 UTC Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.498876 4940 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.500199 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.500267 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.500282 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.501437 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe" Feb 23 08:48:08 crc kubenswrapper[4940]: E0223 08:48:08.501707 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.531635 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.531844 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.533057 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.533249 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.533463 
4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:08 crc kubenswrapper[4940]: I0223 08:48:08.543008 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 23 08:48:09 crc kubenswrapper[4940]: I0223 08:48:09.276083 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:09Z is after 2026-02-23T05:33:13Z Feb 23 08:48:09 crc kubenswrapper[4940]: I0223 08:48:09.290475 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:56:23.668894087 +0000 UTC Feb 23 08:48:09 crc kubenswrapper[4940]: E0223 08:48:09.418323 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 08:48:09 crc kubenswrapper[4940]: I0223 08:48:09.501816 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:09 crc kubenswrapper[4940]: I0223 08:48:09.502539 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:09 crc kubenswrapper[4940]: I0223 08:48:09.502565 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:09 crc kubenswrapper[4940]: I0223 08:48:09.502577 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.067624 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:48:10 
crc kubenswrapper[4940]: I0223 08:48:10.067824 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.068983 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.069021 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.069033 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.069679 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe" Feb 23 08:48:10 crc kubenswrapper[4940]: E0223 08:48:10.069873 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.277104 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:10Z is after 2026-02-23T05:33:13Z Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.291449 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:52:39.379875674 +0000 UTC Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 
08:48:10.544818 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.546108 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.546160 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.546173 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.546199 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:48:10 crc kubenswrapper[4940]: E0223 08:48:10.548895 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:10Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 08:48:10 crc kubenswrapper[4940]: E0223 08:48:10.563993 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:10Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 08:48:10 crc kubenswrapper[4940]: I0223 08:48:10.568107 4940 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 08:48:10 crc 
kubenswrapper[4940]: I0223 08:48:10.568179 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 08:48:11 crc kubenswrapper[4940]: W0223 08:48:11.198332 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:11Z is after 2026-02-23T05:33:13Z Feb 23 08:48:11 crc kubenswrapper[4940]: E0223 08:48:11.198417 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:11Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:11 crc kubenswrapper[4940]: I0223 08:48:11.276657 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:11Z is after 2026-02-23T05:33:13Z Feb 23 08:48:11 crc kubenswrapper[4940]: I0223 08:48:11.291526 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:00:40.418453989 +0000 UTC Feb 23 
08:48:12 crc kubenswrapper[4940]: W0223 08:48:12.176008 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:12Z is after 2026-02-23T05:33:13Z Feb 23 08:48:12 crc kubenswrapper[4940]: E0223 08:48:12.176180 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:12 crc kubenswrapper[4940]: I0223 08:48:12.277179 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:12Z is after 2026-02-23T05:33:13Z Feb 23 08:48:12 crc kubenswrapper[4940]: I0223 08:48:12.292541 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:00:47.033464092 +0000 UTC Feb 23 08:48:12 crc kubenswrapper[4940]: I0223 08:48:12.443515 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 08:48:12 crc kubenswrapper[4940]: E0223 08:48:12.448803 4940 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:13 crc kubenswrapper[4940]: I0223 08:48:13.277422 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:13Z is after 2026-02-23T05:33:13Z Feb 23 08:48:13 crc kubenswrapper[4940]: I0223 08:48:13.293104 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:07:28.356812343 +0000 UTC Feb 23 08:48:14 crc kubenswrapper[4940]: E0223 08:48:14.165788 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:14Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.220677 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.220886 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.222011 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.222059 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.222074 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.222687 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe"
Feb 23 08:48:14 crc kubenswrapper[4940]: E0223 08:48:14.222861 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.277140 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:14Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:14 crc kubenswrapper[4940]: I0223 08:48:14.293920 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:11:52.436905066 +0000 UTC
Feb 23 08:48:15 crc kubenswrapper[4940]: W0223 08:48:15.143522 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:15Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:15 crc kubenswrapper[4940]: E0223 08:48:15.143603 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 08:48:15 crc kubenswrapper[4940]: I0223 08:48:15.276423 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:15Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:15 crc kubenswrapper[4940]: I0223 08:48:15.295032 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:17:43.091238484 +0000 UTC
Feb 23 08:48:16 crc kubenswrapper[4940]: I0223 08:48:16.277527 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:16Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:16 crc kubenswrapper[4940]: I0223 08:48:16.295480 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:08:56.301371783 +0000 UTC
Feb 23 08:48:16 crc kubenswrapper[4940]: W0223 08:48:16.867737 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:16Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:16 crc kubenswrapper[4940]: E0223 08:48:16.867819 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.277112 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:17Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.296468 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:04:14.004052159 +0000 UTC
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.549914 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.551737 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.551822 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.551835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:17 crc kubenswrapper[4940]: I0223 08:48:17.551875 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 08:48:17 crc kubenswrapper[4940]: E0223 08:48:17.557264 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:17Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 23 08:48:17 crc kubenswrapper[4940]: E0223 08:48:17.570001 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:17Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 23 08:48:18 crc kubenswrapper[4940]: I0223 08:48:18.276886 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:18Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:18 crc kubenswrapper[4940]: I0223 08:48:18.297412 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:34:52.703872996 +0000 UTC
Feb 23 08:48:19 crc kubenswrapper[4940]: I0223 08:48:19.277086 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:19Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:19 crc kubenswrapper[4940]: I0223 08:48:19.298178 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:43:08.537513349 +0000 UTC
Feb 23 08:48:19 crc kubenswrapper[4940]: E0223 08:48:19.418485 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.279471 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:20Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.299214 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:33:08.780689594 +0000 UTC
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.568076 4940 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.568181 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.568282 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.568469 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.570120 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.570156 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.570169 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.570672 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e9cd5739e2a71a55c1bf38b8f783c024e47076dcc74b4524b08a2b60ed3d7fd5"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 23 08:48:20 crc kubenswrapper[4940]: I0223 08:48:20.570860 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://e9cd5739e2a71a55c1bf38b8f783c024e47076dcc74b4524b08a2b60ed3d7fd5" gracePeriod=30
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.277072 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:21Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.300253 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:08:08.428619769 +0000 UTC
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.537151 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.537496 4940 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e9cd5739e2a71a55c1bf38b8f783c024e47076dcc74b4524b08a2b60ed3d7fd5" exitCode=255
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.537533 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e9cd5739e2a71a55c1bf38b8f783c024e47076dcc74b4524b08a2b60ed3d7fd5"}
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.537560 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70"}
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.537679 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.538793 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.538818 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:21 crc kubenswrapper[4940]: I0223 08:48:21.538826 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.120309 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.277126 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:22Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.300742 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:19:57.237285575 +0000 UTC
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.539888 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.541196 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.541457 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:22 crc kubenswrapper[4940]: I0223 08:48:22.541687 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:23 crc kubenswrapper[4940]: I0223 08:48:23.276934 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:23Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:23 crc kubenswrapper[4940]: I0223 08:48:23.301112 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:04:17.625772976 +0000 UTC
Feb 23 08:48:24 crc kubenswrapper[4940]: E0223 08:48:24.170013 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:24Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.280138 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:24Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.301662 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:42:53.715658733 +0000 UTC
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.558413 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.559690 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.559726 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.559736 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:24 crc kubenswrapper[4940]: I0223 08:48:24.559789 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 08:48:24 crc kubenswrapper[4940]: E0223 08:48:24.563148 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:24Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 23 08:48:24 crc kubenswrapper[4940]: E0223 08:48:24.573567 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:24Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 23 08:48:25 crc kubenswrapper[4940]: I0223 08:48:25.276997 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:25Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:25 crc kubenswrapper[4940]: I0223 08:48:25.302449 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:58:59.308121698 +0000 UTC
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.277542 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:26Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.303069 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:59:04.705833981 +0000 UTC
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.344878 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.346283 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.346345 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.346375 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.347418 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.553134 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.555162 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33"}
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.555334 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.556236 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.556294 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:26 crc kubenswrapper[4940]: I0223 08:48:26.556306 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.276881 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:27Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.303447 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:19:06.043781084 +0000 UTC
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.559550 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.560204 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.562073 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33" exitCode=255
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.562132 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33"}
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.562192 4940 scope.go:117] "RemoveContainer" containerID="83fc36bc05fe2b5753911ece28d8f15c017c155963372f73699d2632fbddbcfe"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.562415 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.564173 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.564244 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.564260 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.565017 4940 scope.go:117] "RemoveContainer" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33"
Feb 23 08:48:27 crc kubenswrapper[4940]: E0223 08:48:27.565239 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.567716 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.568031 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.569810 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.569859 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:27 crc kubenswrapper[4940]: I0223 08:48:27.569872 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:28 crc kubenswrapper[4940]: I0223 08:48:28.279157 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:28Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:28 crc kubenswrapper[4940]: I0223 08:48:28.304595 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:42:04.891060288 +0000 UTC
Feb 23 08:48:28 crc kubenswrapper[4940]: I0223 08:48:28.572109 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 23 08:48:29 crc kubenswrapper[4940]: W0223 08:48:29.208651 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:29Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:29 crc kubenswrapper[4940]: E0223 08:48:29.208735 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 08:48:29 crc kubenswrapper[4940]: I0223 08:48:29.276858 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:29Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:29 crc kubenswrapper[4940]: I0223 08:48:29.305369 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:51:34.946049264 +0000 UTC
Feb 23 08:48:29 crc kubenswrapper[4940]: I0223 08:48:29.324737 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 08:48:29 crc kubenswrapper[4940]: E0223 08:48:29.328638 4940 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 08:48:29 crc kubenswrapper[4940]: E0223 08:48:29.329821 4940 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError"
Feb 23 08:48:29 crc kubenswrapper[4940]: E0223 08:48:29.418595 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.067249 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.067446 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.068648 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.068735 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.068755 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.069690 4940 scope.go:117] "RemoveContainer" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33"
Feb 23 08:48:30 crc kubenswrapper[4940]: E0223 08:48:30.069948 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.277146 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:30Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.305961 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:19:32.283175707 +0000 UTC
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.568331 4940 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 08:48:30 crc kubenswrapper[4940]: I0223 08:48:30.568402 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.277951 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:31Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.306640 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:41:11.506600092 +0000 UTC
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.564026 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.565439 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.565482 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.565494 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:31 crc kubenswrapper[4940]: I0223 08:48:31.565520 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 08:48:31 crc kubenswrapper[4940]: E0223 08:48:31.568731 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:31Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 23 08:48:31 crc kubenswrapper[4940]: E0223 08:48:31.577692 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:31Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 23 08:48:32 crc kubenswrapper[4940]: I0223 08:48:32.276424 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:32Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:32 crc kubenswrapper[4940]: I0223 08:48:32.307292 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:40:29.983509597 +0000 UTC
Feb 23 08:48:33 crc kubenswrapper[4940]: I0223 08:48:33.276568 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:33Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:33 crc kubenswrapper[4940]: I0223 08:48:33.308259 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:32:27.345805971 +0000 UTC
Feb 23 08:48:33 crc kubenswrapper[4940]: W0223 08:48:33.824381 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:33Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:33 crc kubenswrapper[4940]: E0223 08:48:33.824473 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 08:48:34 crc kubenswrapper[4940]: E0223 08:48:34.174487 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:34Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.221717 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.221989 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.223680 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.223730 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.223749 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.224528 4940 scope.go:117] "RemoveContainer" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33"
Feb 23 08:48:34 crc kubenswrapper[4940]: E0223 08:48:34.224818 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.278789 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:34Z is after 2026-02-23T05:33:13Z
Feb 23 08:48:34 crc kubenswrapper[4940]: I0223 08:48:34.308582 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:40:23.606084624 +0000 UTC
Feb 23 08:48:34 crc kubenswrapper[4940]: W0223 08:48:34.410818 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to
list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:34Z is after 2026-02-23T05:33:13Z Feb 23 08:48:34 crc kubenswrapper[4940]: E0223 08:48:34.410918 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:34Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:35 crc kubenswrapper[4940]: I0223 08:48:35.285335 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:35Z is after 2026-02-23T05:33:13Z Feb 23 08:48:35 crc kubenswrapper[4940]: I0223 08:48:35.309499 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:31:20.735385401 +0000 UTC Feb 23 08:48:36 crc kubenswrapper[4940]: I0223 08:48:36.277021 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:36Z is after 2026-02-23T05:33:13Z Feb 23 08:48:36 crc kubenswrapper[4940]: I0223 08:48:36.310576 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-13 03:27:44.87653872 +0000 UTC Feb 23 08:48:37 crc kubenswrapper[4940]: I0223 08:48:37.277243 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:37Z is after 2026-02-23T05:33:13Z Feb 23 08:48:37 crc kubenswrapper[4940]: I0223 08:48:37.310808 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:34:27.142412299 +0000 UTC Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.276777 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:38Z is after 2026-02-23T05:33:13Z Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.311488 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:52:40.753109441 +0000 UTC Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.569935 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.572131 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.572185 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.572197 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 08:48:38 crc kubenswrapper[4940]: I0223 08:48:38.572228 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:48:38 crc kubenswrapper[4940]: E0223 08:48:38.575636 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:38Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 08:48:38 crc kubenswrapper[4940]: E0223 08:48:38.581046 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:38Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 08:48:39 crc kubenswrapper[4940]: I0223 08:48:39.277585 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:39Z is after 2026-02-23T05:33:13Z Feb 23 08:48:39 crc kubenswrapper[4940]: I0223 08:48:39.312530 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:05:59.269012893 +0000 UTC Feb 23 08:48:39 crc kubenswrapper[4940]: W0223 08:48:39.329524 4940 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:39Z is after 
2026-02-23T05:33:13Z Feb 23 08:48:39 crc kubenswrapper[4940]: E0223 08:48:39.329660 4940 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 08:48:39 crc kubenswrapper[4940]: E0223 08:48:39.418807 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 08:48:40 crc kubenswrapper[4940]: I0223 08:48:40.277122 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:40Z is after 2026-02-23T05:33:13Z Feb 23 08:48:40 crc kubenswrapper[4940]: I0223 08:48:40.313693 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:33:30.20643331 +0000 UTC Feb 23 08:48:40 crc kubenswrapper[4940]: I0223 08:48:40.568485 4940 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 08:48:40 crc kubenswrapper[4940]: I0223 08:48:40.568568 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 08:48:41 crc kubenswrapper[4940]: I0223 08:48:41.277317 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:41Z is after 2026-02-23T05:33:13Z Feb 23 08:48:41 crc kubenswrapper[4940]: I0223 08:48:41.314464 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:29:30.49713582 +0000 UTC Feb 23 08:48:42 crc kubenswrapper[4940]: I0223 08:48:42.277363 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:42Z is after 2026-02-23T05:33:13Z Feb 23 08:48:42 crc kubenswrapper[4940]: I0223 08:48:42.315063 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:20:49.765997748 +0000 UTC Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 08:48:43.279091 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:43Z is after 2026-02-23T05:33:13Z Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 
08:48:43.316138 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:44:23.093919193 +0000 UTC Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 08:48:43.758302 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 08:48:43.758516 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 08:48:43.759965 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 08:48:43.760037 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:43 crc kubenswrapper[4940]: I0223 08:48:43.760050 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:44 crc kubenswrapper[4940]: E0223 08:48:44.178602 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:44Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896d3e196ff1774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,LastTimestamp:2026-02-23 08:47:49.271271284 +0000 UTC m=+0.654477441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:48:44 crc kubenswrapper[4940]: I0223 08:48:44.277064 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:44Z is after 2026-02-23T05:33:13Z Feb 23 08:48:44 crc kubenswrapper[4940]: I0223 08:48:44.316684 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:19:17.359696713 +0000 UTC Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.276651 4940 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:45Z is after 2026-02-23T05:33:13Z Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.317447 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:38:01.550675943 +0000 UTC Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.576353 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.577586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.577633 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.577646 4940 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:45 crc kubenswrapper[4940]: I0223 08:48:45.577671 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:48:45 crc kubenswrapper[4940]: E0223 08:48:45.580844 4940 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:45Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 08:48:45 crc kubenswrapper[4940]: E0223 08:48:45.587376 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:45Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.296107 4940 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.318478 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:17:18.81350866 +0000 UTC Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.345240 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.346558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.346626 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.346638 4940 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 23 08:48:46 crc kubenswrapper[4940]: I0223 08:48:46.347256 4940 scope.go:117] "RemoveContainer" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33" Feb 23 08:48:46 crc kubenswrapper[4940]: E0223 08:48:46.347437 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:48:47 crc kubenswrapper[4940]: I0223 08:48:47.319449 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:39:34.62653803 +0000 UTC Feb 23 08:48:48 crc kubenswrapper[4940]: I0223 08:48:48.320999 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:08:03.819477754 +0000 UTC Feb 23 08:48:49 crc kubenswrapper[4940]: I0223 08:48:49.321281 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:20:03.557379739 +0000 UTC Feb 23 08:48:49 crc kubenswrapper[4940]: E0223 08:48:49.418960 4940 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.322532 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:02:46.341624671 +0000 UTC Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.569250 4940 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.569359 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.569446 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.569721 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.571558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.571641 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.571679 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.572334 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container 
cluster-policy-controller failed startup probe, will be restarted" Feb 23 08:48:50 crc kubenswrapper[4940]: I0223 08:48:50.572471 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70" gracePeriod=30 Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.323439 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:48:33.078706988 +0000 UTC Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.638246 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.640056 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.640405 4940 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70" exitCode=255 Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.640467 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70"} Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.640533 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7"} Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.640568 4940 scope.go:117] "RemoveContainer" containerID="e9cd5739e2a71a55c1bf38b8f783c024e47076dcc74b4524b08a2b60ed3d7fd5" Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.640730 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.642518 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.642584 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:51 crc kubenswrapper[4940]: I0223 08:48:51.642599 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.119595 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.324134 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:50:20.225097017 +0000 UTC Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.581652 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.583088 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.583220 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:52 crc 
kubenswrapper[4940]: I0223 08:48:52.583311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.583504 4940 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.592275 4940 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.592537 4940 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.592564 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.595338 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.595373 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.595383 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.595397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.595407 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:52Z","lastTransitionTime":"2026-02-23T08:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.606453 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.614994 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.615049 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.615063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.615085 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.615099 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:52Z","lastTransitionTime":"2026-02-23T08:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.627226 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.635785 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.635828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.635841 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.635864 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.635878 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:52Z","lastTransitionTime":"2026-02-23T08:48:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.645802 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.647262 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-8
8183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.647666 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.648831 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.648909 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.648923 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.657670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.657781 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.657796 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.657818 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 08:48:52 crc kubenswrapper[4940]: I0223 08:48:52.657829 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:52Z","lastTransitionTime":"2026-02-23T08:48:52Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.669883 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.670048 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.670086 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.770679 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.871802 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:52 crc kubenswrapper[4940]: E0223 08:48:52.972774 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.073780 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.175318 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.275600 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: I0223 08:48:53.325814 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:55:00.195337143 +0000 UTC
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.376716 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.477898 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.578753 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.679746 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.780529 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.881696 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:53 crc kubenswrapper[4940]: E0223 08:48:53.982828 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.083643 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.183914 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.284976 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: I0223 08:48:54.326084 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:11:12.858035988 +0000 UTC
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.385274 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.486239 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.586558 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.687088 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.788275 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.889357 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:54 crc kubenswrapper[4940]: E0223 08:48:54.990391 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.091298 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.191525 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.292207 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: I0223 08:48:55.326690 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:55:55.724448609 +0000 UTC
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.392549 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.493480 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.594580 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.695059 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.796080 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.896344 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:55 crc kubenswrapper[4940]: E0223 08:48:55.996891 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.097719 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.198890 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.299647 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: I0223 08:48:56.326966 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:30:02.26512634 +0000 UTC
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.399798 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.500783 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.601577 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.702941 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.804584 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:56 crc kubenswrapper[4940]: E0223 08:48:56.905328 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.005926 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.106680 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.207374 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.307571 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.328217 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:31:42.377256955 +0000 UTC
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.345655 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.347067 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.347151 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.347165 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.411878 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.512063 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.568448 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.568727 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.570354 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.570424 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.570442 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.573402 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.613223 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.661206 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.662457 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.662521 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:57 crc kubenswrapper[4940]: I0223 08:48:57.662538 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.713891 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.815070 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:57 crc kubenswrapper[4940]: E0223 08:48:57.915429 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.016271 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.116566 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.217392 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.318456 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.328815 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:19:32.759715504 +0000 UTC
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.345312 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.346888 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.346939 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.346953 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.347769 4940 scope.go:117] "RemoveContainer" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.419026 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.519114 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.619712 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.667379 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.669587 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11"}
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.669948 4940 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.671048 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.671091 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:58 crc kubenswrapper[4940]: I0223 08:48:58.671106 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.720780 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.821364 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:58 crc kubenswrapper[4940]: E0223 08:48:58.922116 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.022832 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.123236 4940 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.198098 4940 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.226592 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.226670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.226682 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.226704 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.226719 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.295576 4940 apiserver.go:52] "Watching apiserver"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.299734 4940 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.299972 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.300465 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.300477 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.300551 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.301024 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.301087 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.301310 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.301425 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.301541 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.301671 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.303949 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.304210 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.304661 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.304674 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.305771 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.305868 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.305786 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.305838 4940 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.306096 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.329299 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:57:42.343057579 +0000 UTC Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.329975 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.330035 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.330051 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.330076 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.330091 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.343551 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.355839 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.369284 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.384816 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.385006 4940 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387552 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387594 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387635 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387653 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387671 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387690 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387707 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.387729 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388040 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388047 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388102 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388170 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388259 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388360 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.388984 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389050 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389071 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389206 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389325 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389417 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389531 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389569 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389852 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.389917 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390046 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390087 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390376 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390321 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390435 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390457 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390711 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390927 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.390956 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391180 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391217 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391220 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391256 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391559 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391780 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391289 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391853 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.391874 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392206 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392405 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392465 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392492 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392828 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392872 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392935 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.392961 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393708 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393739 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393762 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393815 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393836 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393861 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393890 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393911 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393932 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393954 4940 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394078 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394102 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394124 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394147 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394168 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394193 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394213 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394241 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394263 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394283 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc 
kubenswrapper[4940]: I0223 08:48:59.394304 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394325 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394344 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394363 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394386 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394407 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394427 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394448 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394468 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394492 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394511 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394529 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394552 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394573 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394593 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394633 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394655 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394676 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394696 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394716 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394738 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394757 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394777 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394799 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394820 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394840 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394859 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.394879 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393226 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395206 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395248 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395268 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395291 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395315 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395335 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395362 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395382 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395403 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395424 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395451 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395473 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395491 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395511 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395533 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395554 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.396139 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.396645 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.396892 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.393660 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395193 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395317 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395356 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395431 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.395582 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.396086 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.396304 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.396434 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.397300 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.397232 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.397843 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.398942 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.403800 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404183 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404201 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404270 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404303 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404325 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404341 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404526 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404424 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404663 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.404790 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405019 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405201 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405204 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405244 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405259 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405495 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405622 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405588 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405681 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405669 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.405782 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406067 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406084 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406120 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406126 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406250 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406469 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406463 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406561 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406671 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406731 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406737 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406839 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406899 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406908 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406932 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.406978 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407005 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407030 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407074 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407085 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407108 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407157 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407183 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407228 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407249 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407293 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407315 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407334 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407378 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407403 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407424 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407463 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407488 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407537 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407560 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407582 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407634 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407655 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407693 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407717 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407737 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407784 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407807 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407847 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407871 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407890 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407927 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407952 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407975 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408021 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408041 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408061 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408110 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408140 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408183 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408201 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408221 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408260 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408279 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408299 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408339 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408359 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408381 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.416136 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407284 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407392 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407667 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424992 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407515 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.407790 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408067 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.408448 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:48:59.90839854 +0000 UTC m=+71.291604697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.408773 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.409086 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.409415 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.409459 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.410047 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.410272 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.410661 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.410709 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.411047 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.411462 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.411786 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.411972 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.412209 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.412356 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.412443 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.412754 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.412826 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.412842 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.413062 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.413366 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.415166 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.415226 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.415794 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.415826 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.415919 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.415934 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.416436 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.417169 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.417957 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.417525 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.418345 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.418373 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.418584 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.418662 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.418706 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.419038 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.419071 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.419137 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.419308 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.419589 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.419844 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.422108 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.423430 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424008 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424123 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424483 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424547 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424464 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.424882 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.425710 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.425847 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.425972 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426045 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426070 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426092 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426110 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426133 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426154 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426175 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426195 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426270 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426292 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: 
I0223 08:48:59.426322 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426370 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426433 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426537 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426558 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426492 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" 
(OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426638 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426566 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.427079 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426866 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426693 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426805 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.427579 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.428214 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.428567 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.428294 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.428908 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429150 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429419 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429467 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429635 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.426695 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429715 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429933 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.427597 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.429794 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.430166 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.430341 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.430761 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.431244 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.431256 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432139 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432205 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432233 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432277 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432310 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432329 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432366 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432385 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.432407 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.433520 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.433550 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444221 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444459 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444784 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444791 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444831 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444837 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444858 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444882 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444902 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444920 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444925 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.444940 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445063 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445084 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445113 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445132 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 08:48:59 crc 
kubenswrapper[4940]: I0223 08:48:59.445155 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445178 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445199 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445220 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445245 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445267 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445295 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445328 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445368 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445401 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445427 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445458 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445486 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445516 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445537 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445567 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445599 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445641 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445673 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445704 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445722 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445738 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446165 4940 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446192 4940 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446207 4940 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446220 4940 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: 
I0223 08:48:59.446236 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446254 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446269 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446282 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446294 4940 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446306 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446318 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446331 4940 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446342 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446354 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446364 4940 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446375 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446386 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446384 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446397 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446448 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446464 4940 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446479 4940 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446494 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446507 4940 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446519 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446532 4940 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446545 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446558 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446572 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446585 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446600 4940 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446629 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446644 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446657 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446670 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446687 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446700 4940 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446714 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446728 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446745 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446855 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446867 4940 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446883 4940 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.446949 4940 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447072 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447094 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447106 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447116 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447129 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447143 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447201 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447215 4940 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447228 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447283 4940 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node 
\"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447295 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447306 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447316 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447327 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447338 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447349 4940 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447361 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447373 4940 reconciler_common.go:293] 
"Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447384 4940 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447399 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447410 4940 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447473 4940 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447484 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447495 4940 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447506 4940 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447562 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447574 4940 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447600 4940 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447629 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447642 4940 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447651 4940 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447662 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447672 4940 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447682 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447695 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447705 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447714 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447724 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447733 4940 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447746 4940 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447757 4940 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447769 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447779 4940 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447790 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447800 4940 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447812 4940 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc 
kubenswrapper[4940]: I0223 08:48:59.447822 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447835 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447848 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447859 4940 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447870 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447881 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447894 4940 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447905 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447916 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447928 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447938 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447949 4940 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447960 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448013 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448024 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448053 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448064 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448074 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448084 4940 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448185 4940 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448294 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448307 4940 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451846 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451876 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451896 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451912 4940 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452018 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452040 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452089 4940 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452102 4940 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452129 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452141 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452153 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452178 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452191 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452202 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452213 4940 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452225 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452254 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452267 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452295 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452306 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452357 4940 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452381 4940 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.454513 4940 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.454549 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.454567 4940 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.454584 4940 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.454602 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447341 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.447672 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448700 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448681 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448734 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.448746 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.449373 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.449478 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.450315 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451166 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451323 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.451816 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452700 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.452996 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453065 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453272 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453310 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453574 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453742 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453762 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453057 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.453947 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.454479 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.445135 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.449590 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.455011 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.455057 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.455523 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.455651 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.455733 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:48:59.955709065 +0000 UTC m=+71.338915222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.455978 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.456451 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.456546 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.456653 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:48:59.956641826 +0000 UTC m=+71.339847983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.456894 4940 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.457135 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.457334 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.458817 4940 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460115 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460145 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460194 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460213 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460229 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460274 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460287 4940 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460300 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460341 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.460355 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.461838 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.465853 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.466455 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.466509 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.467962 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.469249 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.476292 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.477337 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478203 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478254 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478274 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478468 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:48:59.978438421 +0000 UTC m=+71.361644578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478566 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478817 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.478886 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.479003 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:48:59.978976099 +0000 UTC m=+71.362182256 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.478993 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.483009 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.483070 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.484215 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.485478 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.490670 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.491108 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.491210 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.492148 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.493160 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.493838 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.494184 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.494419 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.498995 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.501099 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.505595 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.508173 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.512655 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.520324 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.532275 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.545852 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.559395 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.559438 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.559447 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.559467 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.559477 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.560919 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.560963 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.560999 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561015 4940 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561027 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561037 4940 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561046 4940 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561055 4940 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561123 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561221 4940 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561251 4940 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561265 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561278 4940 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 
08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561290 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561292 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561305 4940 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561366 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561390 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561413 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561429 4940 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561444 4940 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561458 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561471 4940 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561483 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561493 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561504 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561514 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561525 4940 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561557 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561567 4940 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561642 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561654 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561665 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561679 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc 
kubenswrapper[4940]: I0223 08:48:59.561690 4940 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561702 4940 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561713 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561725 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561736 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561747 4940 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561758 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561773 4940 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561784 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561795 4940 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561814 4940 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.561826 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.616202 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.623252 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.635205 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:48:59 crc kubenswrapper[4940]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 08:48:59 crc kubenswrapper[4940]: set -o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 08:48:59 crc kubenswrapper[4940]: source /etc/kubernetes/apiserver-url.env Feb 23 08:48:59 crc kubenswrapper[4940]: else Feb 23 08:48:59 crc kubenswrapper[4940]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 08:48:59 crc kubenswrapper[4940]: exit 1 Feb 23 08:48:59 crc kubenswrapper[4940]: fi Feb 23 08:48:59 crc kubenswrapper[4940]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 08:48:59 crc kubenswrapper[4940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:48:59 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: W0223 08:48:59.635867 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-61a43835c1d749a6bd23713cd927e9652aaac9217cfeb09acb40aea9a8227a4d WatchSource:0}: Error finding container 61a43835c1d749a6bd23713cd927e9652aaac9217cfeb09acb40aea9a8227a4d: Status 404 returned error can't find the container with id 61a43835c1d749a6bd23713cd927e9652aaac9217cfeb09acb40aea9a8227a4d Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.636477 4940 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.643031 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.645014 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.647342 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.660567 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:48:59 crc kubenswrapper[4940]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 08:48:59 crc kubenswrapper[4940]: if [[ -f "/env/_master" ]]; then Feb 23 08:48:59 crc kubenswrapper[4940]: set -o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: source "/env/_master" Feb 23 08:48:59 crc kubenswrapper[4940]: set +o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: fi Feb 23 08:48:59 crc kubenswrapper[4940]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 23 08:48:59 crc kubenswrapper[4940]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 08:48:59 crc kubenswrapper[4940]: ho_enable="--enable-hybrid-overlay" Feb 23 08:48:59 crc kubenswrapper[4940]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 08:48:59 crc kubenswrapper[4940]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 08:48:59 crc kubenswrapper[4940]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 08:48:59 crc kubenswrapper[4940]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 08:48:59 crc kubenswrapper[4940]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --webhook-host=127.0.0.1 \ Feb 23 08:48:59 crc kubenswrapper[4940]: --webhook-port=9743 \ Feb 23 08:48:59 crc kubenswrapper[4940]: ${ho_enable} \ Feb 23 08:48:59 crc kubenswrapper[4940]: --enable-interconnect \ Feb 23 08:48:59 crc kubenswrapper[4940]: --disable-approver \ Feb 23 08:48:59 crc kubenswrapper[4940]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --wait-for-kubernetes-api=200s \ Feb 23 08:48:59 crc kubenswrapper[4940]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --loglevel="${LOGLEVEL}" Feb 23 08:48:59 crc kubenswrapper[4940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:48:59 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.662598 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.662782 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.662848 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.662912 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.662970 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.663721 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:48:59 crc kubenswrapper[4940]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 08:48:59 crc kubenswrapper[4940]: if [[ -f "/env/_master" ]]; then Feb 23 08:48:59 crc kubenswrapper[4940]: set -o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: source "/env/_master" Feb 23 08:48:59 crc kubenswrapper[4940]: set +o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: fi Feb 23 08:48:59 crc kubenswrapper[4940]: Feb 23 08:48:59 crc kubenswrapper[4940]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 08:48:59 crc kubenswrapper[4940]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 08:48:59 crc kubenswrapper[4940]: --disable-webhook \ Feb 23 08:48:59 crc kubenswrapper[4940]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --loglevel="${LOGLEVEL}" Feb 23 08:48:59 crc kubenswrapper[4940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:48:59 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.665092 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.674486 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"61a43835c1d749a6bd23713cd927e9652aaac9217cfeb09acb40aea9a8227a4d"} Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.676483 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.676880 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2fb963fc6ac6a1a8ac5ad8cb69c02481117b76f2bb7381b30bd4f5417619887c"} Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.677690 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.679002 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:48:59 crc kubenswrapper[4940]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 08:48:59 crc kubenswrapper[4940]: set -o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 08:48:59 crc kubenswrapper[4940]: source /etc/kubernetes/apiserver-url.env Feb 23 08:48:59 crc kubenswrapper[4940]: else Feb 23 08:48:59 crc kubenswrapper[4940]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 08:48:59 crc kubenswrapper[4940]: exit 1 Feb 23 08:48:59 crc kubenswrapper[4940]: fi Feb 23 08:48:59 crc kubenswrapper[4940]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 08:48:59 crc kubenswrapper[4940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:48:59 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.680205 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.680743 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.681725 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.685443 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" exitCode=255 Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.685651 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11"} Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.685741 4940 scope.go:117] "RemoveContainer" containerID="8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.687494 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.689861 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"539e3798ad0fdcdeaf688db125ced356d389b8d4bf123a532f09cecaf10b7e3a"} Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.698392 4940 scope.go:117] 
"RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.698478 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.698707 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.702836 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:48:59 crc kubenswrapper[4940]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 08:48:59 crc kubenswrapper[4940]: if [[ -f "/env/_master" ]]; then Feb 23 08:48:59 crc kubenswrapper[4940]: set -o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: source "/env/_master" Feb 23 08:48:59 crc kubenswrapper[4940]: set +o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: fi Feb 23 08:48:59 crc kubenswrapper[4940]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 23 08:48:59 crc kubenswrapper[4940]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 08:48:59 crc kubenswrapper[4940]: ho_enable="--enable-hybrid-overlay" Feb 23 08:48:59 crc kubenswrapper[4940]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 08:48:59 crc kubenswrapper[4940]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 08:48:59 crc kubenswrapper[4940]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 08:48:59 crc kubenswrapper[4940]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 08:48:59 crc kubenswrapper[4940]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --webhook-host=127.0.0.1 \ Feb 23 08:48:59 crc kubenswrapper[4940]: --webhook-port=9743 \ Feb 23 08:48:59 crc kubenswrapper[4940]: ${ho_enable} \ Feb 23 08:48:59 crc kubenswrapper[4940]: --enable-interconnect \ Feb 23 08:48:59 crc kubenswrapper[4940]: --disable-approver \ Feb 23 08:48:59 crc kubenswrapper[4940]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --wait-for-kubernetes-api=200s \ Feb 23 08:48:59 crc kubenswrapper[4940]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --loglevel="${LOGLEVEL}" Feb 23 08:48:59 crc kubenswrapper[4940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:48:59 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.702917 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.705651 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:48:59 crc kubenswrapper[4940]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 08:48:59 crc kubenswrapper[4940]: if [[ -f "/env/_master" ]]; then Feb 23 08:48:59 crc kubenswrapper[4940]: set -o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: source "/env/_master" Feb 23 08:48:59 crc kubenswrapper[4940]: set +o allexport Feb 23 08:48:59 crc kubenswrapper[4940]: fi Feb 23 08:48:59 crc kubenswrapper[4940]: Feb 23 08:48:59 crc kubenswrapper[4940]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 08:48:59 crc kubenswrapper[4940]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 08:48:59 crc kubenswrapper[4940]: --disable-webhook \ Feb 23 08:48:59 crc kubenswrapper[4940]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 08:48:59 crc kubenswrapper[4940]: --loglevel="${LOGLEVEL}" Feb 23 08:48:59 crc kubenswrapper[4940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:48:59 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.707275 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.716197 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.736083 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.747851 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.762171 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.765865 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.765913 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.765926 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.765949 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.765965 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.778157 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.795461 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.808721 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c6585ef982c59b0a07aee05f22dc686b80323299c9ca43be2ddd4365e977c33\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:27Z\\\",\\\"message\\\":\\\"W0223 08:48:26.633495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 08:48:26.634048 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771836506 cert, and key in /tmp/serving-cert-3064489127/serving-signer.crt, /tmp/serving-cert-3064489127/serving-signer.key\\\\nI0223 08:48:26.823895 1 observer_polling.go:159] Starting file observer\\\\nW0223 08:48:26.829459 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:26Z is after 2026-02-23T05:33:16Z\\\\nI0223 08:48:26.829775 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:26.843989 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3064489127/tls.crt::/tmp/serving-cert-3064489127/tls.key\\\\\\\"\\\\nF0223 08:48:27.231479 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:48:27Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.829265 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.843052 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.855890 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.868763 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.868802 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.868811 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.868831 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.868842 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.876985 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.965903 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.966083 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.966212 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:49:00.966163734 +0000 UTC m=+72.349369901 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.966260 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.966327 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.966353 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:00.966326879 +0000 UTC m=+72.349533196 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.966490 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:48:59 crc kubenswrapper[4940]: E0223 08:48:59.966559 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:00.966543887 +0000 UTC m=+72.349750234 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.973079 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.973140 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.973157 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:48:59 crc kubenswrapper[4940]: I0223 08:48:59.973183 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:48:59 crc 
kubenswrapper[4940]: I0223 08:48:59.973201 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:48:59Z","lastTransitionTime":"2026-02-23T08:48:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.067407 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.067743 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.067824 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068008 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068035 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068041 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068050 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068071 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068089 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068154 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:01.068106511 +0000 UTC m=+72.451312678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.068195 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:01.068184824 +0000 UTC m=+72.451390981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.076071 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.076143 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.076229 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.076257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.076292 4940 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.179694 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.179759 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.179773 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.179796 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.179809 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.282888 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.282955 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.282970 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.282997 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.283013 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.330229 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:32:43.410228071 +0000 UTC Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.345593 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.345985 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.395294 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.395345 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.395383 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.395410 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.395422 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.498531 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.498591 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.498603 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.498636 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.498673 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.602338 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.602424 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.602441 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.602467 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.602489 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.696246 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.699269 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.699454 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.704809 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.704852 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.704865 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.704887 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.704901 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.713316 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.727361 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.739490 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.754820 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.769723 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d
8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.781915 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.792541 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.807558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.807592 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.807604 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.807635 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.807648 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.910392 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.910438 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.910453 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.910472 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.910485 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:00Z","lastTransitionTime":"2026-02-23T08:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.958769 4940 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.976503 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.976667 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:00 crc kubenswrapper[4940]: I0223 08:49:00.976702 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.976743 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:49:02.976707443 +0000 UTC m=+74.359913600 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.976809 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.976898 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:02.976870319 +0000 UTC m=+74.360076616 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.976935 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:00 crc kubenswrapper[4940]: E0223 08:49:00.977047 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 08:49:02.977017634 +0000 UTC m=+74.360223961 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.014180 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.014220 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.014230 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.014252 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.014263 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.077872 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.077930 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078045 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078063 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078076 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078117 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:03.078104722 +0000 UTC m=+74.461310879 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078203 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078249 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078266 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.078345 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:03.078318419 +0000 UTC m=+74.461524576 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.116308 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.116347 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.116355 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.116370 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.116382 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.218576 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.218627 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.218637 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.218661 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.218673 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.321707 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.321760 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.321771 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.321791 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.321805 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.330881 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:12:34.902773268 +0000 UTC Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.330973 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.341941 4940 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.345377 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.345510 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.345633 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.345768 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.349798 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.350856 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.351854 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.352642 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.353388 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.353983 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.354778 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.355510 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.358163 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.359029 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.360465 4940 csr.go:261] certificate signing request csr-549wk is approved, waiting to be issued Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.360996 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.361741 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.362884 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.363527 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.364204 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.365212 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.366321 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.367973 4940 csr.go:257] certificate signing request csr-549wk is issued Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.368567 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.369320 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.370057 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.370731 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.371411 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 
08:49:01.371947 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.372712 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.373258 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.373953 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.374696 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.375263 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.375916 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.376498 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 
08:49:01.377079 4940 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.377189 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.380142 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.381309 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.381904 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.383467 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.384253 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.384865 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.387051 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.388126 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.389123 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.389852 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.391106 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.392368 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.393219 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.393819 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.394800 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.396025 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.396544 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.397075 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.397997 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.398574 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.399705 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.400268 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.425066 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.425116 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.425137 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.425160 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.425190 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.528146 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.528235 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.528255 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.528278 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.528324 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.631320 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.631375 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.631388 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.631408 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.631424 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.703067 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:49:01 crc kubenswrapper[4940]: E0223 08:49:01.703319 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.734795 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.734847 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.734857 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.734874 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.735191 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.838402 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.838475 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.838489 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.838511 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.838525 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.941802 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.941857 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.941880 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.941902 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:01 crc kubenswrapper[4940]: I0223 08:49:01.941914 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:01Z","lastTransitionTime":"2026-02-23T08:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.044330 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.044397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.044411 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.044434 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.044449 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.124468 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.133308 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.135364 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.144564 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.147518 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.147563 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 
08:49:02.147577 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.147596 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.147607 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.156869 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.168902 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.180472 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.193545 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.206342 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.250854 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.251246 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.251363 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.251461 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.251560 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.331591 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:19:32.699694138 +0000 UTC Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.345408 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:02 crc kubenswrapper[4940]: E0223 08:49:02.345858 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.354506 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.354558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.354571 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.354590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.354603 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.368747 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-23 08:44:01 +0000 UTC, rotation deadline is 2026-11-14 06:14:37.828575655 +0000 UTC Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.368792 4940 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6333h25m35.459786178s for next certificate rotation Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.457777 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.457822 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.457832 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.457849 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.457861 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.560320 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.560355 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.560364 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.560379 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.560389 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.662900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.662942 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.662952 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.662968 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.662980 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.765184 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.765232 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.765243 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.765258 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.765268 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.867426 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.867480 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.867491 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.867507 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.867522 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.969500 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.969839 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.969929 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.970006 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.970077 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:02Z","lastTransitionTime":"2026-02-23T08:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.995983 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.996093 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:02 crc kubenswrapper[4940]: I0223 08:49:02.996127 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:02 crc kubenswrapper[4940]: E0223 08:49:02.996153 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:49:06.996122547 +0000 UTC m=+78.379328704 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:49:02 crc kubenswrapper[4940]: E0223 08:49:02.996211 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:02 crc kubenswrapper[4940]: E0223 08:49:02.996236 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:02 crc kubenswrapper[4940]: E0223 08:49:02.996283 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:06.996265021 +0000 UTC m=+78.379471228 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:02 crc kubenswrapper[4940]: E0223 08:49:02.996301 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 08:49:06.996293552 +0000 UTC m=+78.379499829 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.023246 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.023298 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.023312 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.023334 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.023347 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.034419 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.039224 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.039269 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.039280 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.039295 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.039304 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.050183 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.054368 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.054410 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.054427 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.054446 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.054460 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.087976 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.088009 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.088020 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.088039 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.088053 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.096961 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.097000 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097143 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097160 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097211 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097225 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 
08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097169 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097309 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097292 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:07.097268997 +0000 UTC m=+78.480475384 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.097371 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:07.09735363 +0000 UTC m=+78.480559787 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.099326 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.099486 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.107566 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.108129 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.108364 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.108386 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.108402 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.211835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.211910 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.211923 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.211948 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.211965 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.314437 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.314473 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.314481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.314496 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.314536 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.331980 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:27:28.282034554 +0000 UTC Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.345446 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.345445 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.345594 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:03 crc kubenswrapper[4940]: E0223 08:49:03.346154 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.416916 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.417000 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.417013 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.417034 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.417064 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.520501 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.520564 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.520577 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.520600 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.520635 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.623447 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.623513 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.623527 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.623553 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.623568 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.725831 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.725872 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.725882 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.725900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.725916 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.828885 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.828948 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.828959 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.828980 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.828995 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.942489 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.942546 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.942561 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.942584 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:03 crc kubenswrapper[4940]: I0223 08:49:03.942598 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:03Z","lastTransitionTime":"2026-02-23T08:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.046287 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.046354 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.046374 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.046400 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.046414 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.149940 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.150003 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.150014 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.150036 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.150048 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.220665 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.221520 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:49:04 crc kubenswrapper[4940]: E0223 08:49:04.221711 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.253039 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.253131 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.253143 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.253165 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.253178 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.332501 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:04:56.509402481 +0000 UTC Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.345029 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:04 crc kubenswrapper[4940]: E0223 08:49:04.345239 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.356375 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.356429 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.356445 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.356470 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.356488 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.459936 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.459974 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.459983 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.460000 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.460011 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.563258 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.563299 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.563308 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.563325 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.563334 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.666407 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.666491 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.666501 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.666518 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.666533 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.769511 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.769569 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.769643 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.769744 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.769764 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.881304 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.881347 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.881356 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.881377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.881388 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.984177 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.984227 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.984244 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.984270 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:04 crc kubenswrapper[4940]: I0223 08:49:04.984287 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:04Z","lastTransitionTime":"2026-02-23T08:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.087689 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.087755 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.087779 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.087818 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.087842 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.190822 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.190865 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.190876 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.190897 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.190908 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.294859 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.294930 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.294941 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.294963 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.294974 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.333678 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:52:38.867872897 +0000 UTC Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.345112 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.345213 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:05 crc kubenswrapper[4940]: E0223 08:49:05.345293 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:05 crc kubenswrapper[4940]: E0223 08:49:05.345417 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.397658 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.397734 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.397745 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.397773 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.397825 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.502161 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.502246 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.502256 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.502276 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.502288 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.605297 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.605350 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.605359 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.605403 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.605417 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.709457 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.709665 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.709728 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.709766 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.709808 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.813769 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.814122 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.814263 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.814410 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.814550 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.918146 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.918518 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.918587 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.918684 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:05 crc kubenswrapper[4940]: I0223 08:49:05.918768 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:05Z","lastTransitionTime":"2026-02-23T08:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.021551 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.021838 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.021914 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.022054 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.022143 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.125089 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.125128 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.125137 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.125153 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.125163 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.228579 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.228673 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.228687 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.228705 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.228717 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.332090 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.332587 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.332731 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.332859 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.332987 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.334213 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:05:40.907945466 +0000 UTC Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.344652 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:06 crc kubenswrapper[4940]: E0223 08:49:06.344918 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.435885 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.435932 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.435943 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.435960 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.435971 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.538192 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.538229 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.538239 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.538256 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.538267 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.640519 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.640562 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.640574 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.640590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.640603 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.742784 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.742839 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.742851 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.742865 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.742874 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.845271 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.845355 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.845367 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.845409 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.845424 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.947776 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.947819 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.947829 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.947846 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:06 crc kubenswrapper[4940]: I0223 08:49:06.947856 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:06Z","lastTransitionTime":"2026-02-23T08:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.034382 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.034553 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 08:49:15.034531949 +0000 UTC m=+86.417738106 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.034545 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.034597 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.034720 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.034726 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.034762 4940 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:15.034752726 +0000 UTC m=+86.417958883 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.034802 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:15.034777766 +0000 UTC m=+86.417983963 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.049856 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.049886 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.049896 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.049912 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.049923 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.135872 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.135924 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136088 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136130 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136143 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136195 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:15.136178215 +0000 UTC m=+86.519384432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136096 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136231 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136247 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.136309 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:15.136290649 +0000 UTC m=+86.519496886 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.151891 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.151928 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.151939 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.151974 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.151987 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.254116 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.254173 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.254187 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.254203 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.254215 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.335380 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:29:04.517904865 +0000 UTC Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.344950 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.345049 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.345157 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:07 crc kubenswrapper[4940]: E0223 08:49:07.345242 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.356928 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.357163 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.357262 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.357359 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.357451 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.461377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.461700 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.461845 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.461957 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.462071 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.566586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.566659 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.566669 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.566689 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.566702 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.670515 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.670571 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.670627 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.670656 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.670667 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.773869 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.773918 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.773934 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.773959 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.773976 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.877244 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.877305 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.877316 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.877336 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.877347 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.979726 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.979770 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.979781 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.979795 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:07 crc kubenswrapper[4940]: I0223 08:49:07.979805 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:07Z","lastTransitionTime":"2026-02-23T08:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.083568 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.083691 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.083716 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.083749 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.083781 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.185990 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.186025 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.186036 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.186052 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.186063 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.289334 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.289401 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.289420 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.289448 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.289470 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.335740 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:27:49.045157658 +0000 UTC Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.345281 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:08 crc kubenswrapper[4940]: E0223 08:49:08.345432 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.393224 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.393474 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.393561 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.393697 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.393806 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.496303 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.496344 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.496356 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.496371 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.496384 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.598834 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.598881 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.598891 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.598908 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.598920 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.701779 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.701820 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.701830 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.701845 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.701856 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.804731 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.804774 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.804784 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.804801 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.804815 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.907002 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.907060 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.907070 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.907087 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:08 crc kubenswrapper[4940]: I0223 08:49:08.907101 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:08Z","lastTransitionTime":"2026-02-23T08:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.009768 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.009823 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.009840 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.009862 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.009879 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.112124 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.112165 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.112177 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.112194 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.112205 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.114444 4940 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.214556 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.214604 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.214634 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.214650 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.214661 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.317521 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.317575 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.317592 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.317643 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.317662 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.336116 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:36:39.330371686 +0000 UTC Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.345517 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:09 crc kubenswrapper[4940]: E0223 08:49:09.345689 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.345729 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:09 crc kubenswrapper[4940]: E0223 08:49:09.345854 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.357250 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.368231 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.379577 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.390994 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d
8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.401422 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.412245 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.420315 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.420367 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc 
kubenswrapper[4940]: I0223 08:49:09.420377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.420394 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.420404 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.422429 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.433386 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.523235 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.523305 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.523328 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.523353 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.523371 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.626194 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.626254 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.626270 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.626292 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.626309 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.729066 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.729121 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.729138 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.729159 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.729177 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.831763 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.831811 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.831823 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.831841 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.831852 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.934758 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.934822 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.934835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.934856 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:09 crc kubenswrapper[4940]: I0223 08:49:09.934868 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:09Z","lastTransitionTime":"2026-02-23T08:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.037107 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.037176 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.037193 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.037220 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.037238 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.140022 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.140094 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.140114 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.140144 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.140164 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.243016 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.243077 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.243088 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.243110 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.243124 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.336270 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:18:18.093196032 +0000 UTC Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.344920 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:10 crc kubenswrapper[4940]: E0223 08:49:10.345081 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.345357 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.345434 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.345447 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.345463 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.345473 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.448007 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.448067 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.448081 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.448098 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.448110 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.550187 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.550234 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.550244 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.550263 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.550650 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.653644 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.653701 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.653713 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.653731 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.653745 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.756652 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.756699 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.756721 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.756741 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.756751 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.859396 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.859452 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.859463 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.859481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.859495 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.962481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.962527 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.962538 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.962555 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:10 crc kubenswrapper[4940]: I0223 08:49:10.962566 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:10Z","lastTransitionTime":"2026-02-23T08:49:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.064670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.064704 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.064712 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.064728 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.064736 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.167140 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.167205 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.167216 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.167232 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.167244 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.270275 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.270344 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.270368 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.270399 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.270420 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.337219 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:10:29.025129531 +0000 UTC Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.345719 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.345747 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:11 crc kubenswrapper[4940]: E0223 08:49:11.346163 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:11 crc kubenswrapper[4940]: E0223 08:49:11.346401 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:11 crc kubenswrapper[4940]: E0223 08:49:11.348054 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:49:11 crc kubenswrapper[4940]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 08:49:11 crc kubenswrapper[4940]: if [[ -f "/env/_master" ]]; then Feb 23 08:49:11 crc kubenswrapper[4940]: set -o allexport Feb 23 08:49:11 crc kubenswrapper[4940]: source "/env/_master" Feb 23 08:49:11 crc kubenswrapper[4940]: set +o allexport Feb 23 08:49:11 crc kubenswrapper[4940]: fi Feb 23 08:49:11 crc kubenswrapper[4940]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 23 08:49:11 crc kubenswrapper[4940]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 08:49:11 crc kubenswrapper[4940]: ho_enable="--enable-hybrid-overlay" Feb 23 08:49:11 crc kubenswrapper[4940]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 08:49:11 crc kubenswrapper[4940]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 08:49:11 crc kubenswrapper[4940]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 08:49:11 crc kubenswrapper[4940]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 08:49:11 crc kubenswrapper[4940]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 08:49:11 crc kubenswrapper[4940]: --webhook-host=127.0.0.1 \ Feb 23 08:49:11 crc kubenswrapper[4940]: --webhook-port=9743 \ Feb 23 08:49:11 crc kubenswrapper[4940]: ${ho_enable} \ Feb 23 08:49:11 crc kubenswrapper[4940]: --enable-interconnect \ Feb 23 08:49:11 crc kubenswrapper[4940]: --disable-approver \ Feb 23 08:49:11 crc kubenswrapper[4940]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 08:49:11 crc kubenswrapper[4940]: --wait-for-kubernetes-api=200s \ Feb 23 08:49:11 crc kubenswrapper[4940]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 08:49:11 crc kubenswrapper[4940]: --loglevel="${LOGLEVEL}" Feb 23 08:49:11 crc kubenswrapper[4940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:49:11 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:49:11 crc kubenswrapper[4940]: E0223 08:49:11.350893 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:49:11 crc kubenswrapper[4940]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 08:49:11 crc 
kubenswrapper[4940]: if [[ -f "/env/_master" ]]; then Feb 23 08:49:11 crc kubenswrapper[4940]: set -o allexport Feb 23 08:49:11 crc kubenswrapper[4940]: source "/env/_master" Feb 23 08:49:11 crc kubenswrapper[4940]: set +o allexport Feb 23 08:49:11 crc kubenswrapper[4940]: fi Feb 23 08:49:11 crc kubenswrapper[4940]: Feb 23 08:49:11 crc kubenswrapper[4940]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 08:49:11 crc kubenswrapper[4940]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 08:49:11 crc kubenswrapper[4940]: --disable-webhook \ Feb 23 08:49:11 crc kubenswrapper[4940]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 08:49:11 crc kubenswrapper[4940]: --loglevel="${LOGLEVEL}" Feb 23 08:49:11 crc kubenswrapper[4940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:49:11 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:49:11 crc kubenswrapper[4940]: E0223 08:49:11.352235 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.373517 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.373590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.373651 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.373684 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.373717 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.476282 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.476580 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.476702 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.476825 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.476929 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.580748 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.580828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.580860 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.580902 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.580924 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.683465 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.683531 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.683554 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.683582 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.683603 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.785505 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.785563 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.785575 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.785594 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.785608 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.888054 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.888105 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.888117 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.888138 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.888151 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.990491 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.990534 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.990542 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.990555 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:11 crc kubenswrapper[4940]: I0223 08:49:11.990564 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:11Z","lastTransitionTime":"2026-02-23T08:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.092576 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.092854 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.092949 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.093063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.093156 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.196840 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.196894 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.196923 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.196944 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.196957 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.299899 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.299970 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.299983 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.300004 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.300037 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.338267 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:02:00.74710088 +0000 UTC Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.345685 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:12 crc kubenswrapper[4940]: E0223 08:49:12.345896 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.402981 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.403042 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.403056 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.403077 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.403091 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.506401 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.506471 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.506484 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.506499 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.506510 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.609441 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.609483 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.609493 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.609508 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.609519 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.712981 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.713028 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.713039 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.713054 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.713065 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.815986 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.816060 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.816073 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.816092 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.816125 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.918363 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.918443 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.918466 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.918501 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:12 crc kubenswrapper[4940]: I0223 08:49:12.918536 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:12Z","lastTransitionTime":"2026-02-23T08:49:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.021516 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.021569 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.021580 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.021599 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.021626 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.124897 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.125186 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.125269 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.125365 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.125455 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.156511 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.156572 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.156590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.156657 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.156683 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.169069 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.174842 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.174927 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.174944 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.174966 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.175014 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.188039 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.192323 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.192385 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.192397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.192413 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.192423 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.203459 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.207764 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.207817 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.207828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.207842 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.207853 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.222349 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.226746 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.226814 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.226826 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.226843 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.226858 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.239040 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.239231 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.241985 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.242045 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.242058 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.242085 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.242099 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.338806 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:32:25.948640557 +0000 UTC Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.344706 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.344841 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.344900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.344949 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.344969 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.344993 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.345020 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.345043 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: E0223 08:49:13.345125 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.447750 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.447796 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.447813 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.447834 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.447851 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.550129 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.550192 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.550204 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.550223 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.550239 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.653023 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.653081 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.653092 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.653110 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.653125 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.756063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.756123 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.756140 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.756160 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.756177 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.858885 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.858962 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.858985 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.859014 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.859038 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.961159 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.961211 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.961220 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.961241 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:13 crc kubenswrapper[4940]: I0223 08:49:13.961252 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:13Z","lastTransitionTime":"2026-02-23T08:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.063871 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.063929 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.063941 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.063957 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.063968 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.166122 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.166158 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.166167 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.166180 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.166189 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.268997 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.269040 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.269049 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.269067 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.269079 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.339251 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:14:33.826889177 +0000 UTC Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.345655 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:14 crc kubenswrapper[4940]: E0223 08:49:14.345789 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:14 crc kubenswrapper[4940]: E0223 08:49:14.347418 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fal
lbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 08:49:14 crc kubenswrapper[4940]: E0223 08:49:14.348594 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.371523 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.371648 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.371668 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.371692 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.371739 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.474141 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.474180 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.474192 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.474211 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.474223 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.576141 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.576187 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.576196 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.576214 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.576228 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.678396 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.678444 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.678455 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.678471 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.678483 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.781133 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.781175 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.781184 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.781198 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.781208 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.885869 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.885916 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.885926 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.885941 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.885954 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.989193 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.989257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.989274 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.989301 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:14 crc kubenswrapper[4940]: I0223 08:49:14.989318 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:14Z","lastTransitionTime":"2026-02-23T08:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.091882 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.091924 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.091934 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.091949 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.091960 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.109410 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.109501 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.109528 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.109555 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:49:31.109525529 +0000 UTC m=+102.492731716 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.109623 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.109674 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:31.109663834 +0000 UTC m=+102.492869991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.109684 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.109781 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 08:49:31.109753797 +0000 UTC m=+102.492960014 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.194253 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.194352 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.194402 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.194456 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.194474 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.210059 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.210120 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210250 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210267 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210278 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210326 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:31.210309297 +0000 UTC m=+102.593515454 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210442 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210482 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210504 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.210589 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:49:31.210564116 +0000 UTC m=+102.593770323 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.297126 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.297185 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.297194 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.297214 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.297227 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.340354 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:57:38.896413319 +0000 UTC Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.344798 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.344798 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.345061 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.345211 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.346940 4940 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 08:49:15 crc kubenswrapper[4940]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 08:49:15 crc kubenswrapper[4940]: set -o allexport Feb 23 08:49:15 crc kubenswrapper[4940]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 08:49:15 crc kubenswrapper[4940]: source /etc/kubernetes/apiserver-url.env Feb 23 08:49:15 crc kubenswrapper[4940]: else Feb 23 08:49:15 crc kubenswrapper[4940]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 08:49:15 crc kubenswrapper[4940]: exit 1 Feb 23 08:49:15 crc kubenswrapper[4940]: fi Feb 23 08:49:15 crc kubenswrapper[4940]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 08:49:15 crc kubenswrapper[4940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 08:49:15 crc kubenswrapper[4940]: > logger="UnhandledError" Feb 23 08:49:15 crc kubenswrapper[4940]: E0223 08:49:15.348569 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.399350 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 
08:49:15.399389 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.399397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.399413 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.399424 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.502038 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.502086 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.502098 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.502118 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.502130 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.604562 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.604675 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.604700 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.604723 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.604740 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.706892 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.706933 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.706943 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.706958 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.706969 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.809097 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.809160 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.809176 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.809202 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.809219 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.911994 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.912056 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.912078 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.912110 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:15 crc kubenswrapper[4940]: I0223 08:49:15.912130 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:15Z","lastTransitionTime":"2026-02-23T08:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.014184 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.014233 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.014243 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.014257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.014269 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.116767 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.117013 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.117026 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.117042 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.117056 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.218789 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.218828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.218837 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.218850 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.218859 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.320674 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.320715 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.320723 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.320736 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.320745 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.341236 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:26:12.939623589 +0000 UTC Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.345433 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:16 crc kubenswrapper[4940]: E0223 08:49:16.345515 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.427015 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.427069 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.427081 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.427098 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.427109 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.529722 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.529764 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.529776 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.529817 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.529830 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.620858 4940 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.631948 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.632000 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.632012 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.632031 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.632045 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.734520 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.734566 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.734577 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.734594 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.734626 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.836744 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.836787 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.836798 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.836814 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.836825 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.939575 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.939654 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.939667 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.939685 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:16 crc kubenswrapper[4940]: I0223 08:49:16.939695 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:16Z","lastTransitionTime":"2026-02-23T08:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.041970 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.042011 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.042021 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.042038 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.042048 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.145224 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.145273 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.145284 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.145301 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.145311 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.248427 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.248472 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.248482 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.248499 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.248511 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.341643 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:36:06.484745866 +0000 UTC Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.345039 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.345090 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:17 crc kubenswrapper[4940]: E0223 08:49:17.345198 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:17 crc kubenswrapper[4940]: E0223 08:49:17.345325 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.350923 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.350978 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.351004 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.351030 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.351052 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.453692 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.453750 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.453768 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.453792 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.453809 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.556684 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.556758 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.556775 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.556802 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.556816 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.659114 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.659167 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.659183 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.659207 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.659224 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.762395 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.762523 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.762538 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.762555 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.762567 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.866398 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.866442 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.866468 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.866481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.866491 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.969728 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.969801 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.969818 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.969835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:17 crc kubenswrapper[4940]: I0223 08:49:17.969848 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:17Z","lastTransitionTime":"2026-02-23T08:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.072417 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.072514 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.072532 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.072564 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.072581 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.174462 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.174539 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.174553 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.174588 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.174602 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.276852 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.276896 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.276908 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.276927 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.276939 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.342688 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:10:48.945343895 +0000 UTC Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.345019 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:18 crc kubenswrapper[4940]: E0223 08:49:18.345500 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.345802 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:49:18 crc kubenswrapper[4940]: E0223 08:49:18.346045 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.379815 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.379883 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.379896 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.379930 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.379942 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.482468 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.482507 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.482517 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.482531 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.482540 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.584827 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.584873 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.584883 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.584897 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.584907 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.688008 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.688049 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.688058 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.688073 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.688083 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.791510 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.791572 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.791590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.791644 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.791663 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.894854 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.894900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.894909 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.894926 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.894937 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.997055 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.997119 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.997135 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.997152 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:18 crc kubenswrapper[4940]: I0223 08:49:18.997162 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:18Z","lastTransitionTime":"2026-02-23T08:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.099700 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.099747 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.099758 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.099781 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.099794 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.202340 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.202374 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.202384 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.202399 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.202409 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.305290 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.305361 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.305398 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.305429 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.305451 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.343209 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:31:01.715170191 +0000 UTC Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.345674 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:19 crc kubenswrapper[4940]: E0223 08:49:19.345792 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.346465 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:19 crc kubenswrapper[4940]: E0223 08:49:19.346648 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.361862 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.377541 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.391807 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.407402 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.407457 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.407469 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.407487 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.407500 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.409363 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.420190 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d
8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.439607 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.449237 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.459767 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.510371 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.510778 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.510987 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.511243 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.511451 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.615459 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.615513 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.615530 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.615569 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.615587 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.718351 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.718418 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.718428 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.718449 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.718462 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.821318 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.821380 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.821393 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.821416 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.821431 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.924233 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.924317 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.924338 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.924361 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:19 crc kubenswrapper[4940]: I0223 08:49:19.924377 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:19Z","lastTransitionTime":"2026-02-23T08:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.027252 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.027288 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.027298 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.027311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.027322 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.130297 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.130346 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.130357 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.130374 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.130387 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.233270 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.233334 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.233355 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.233382 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.233405 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.335602 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.335659 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.335670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.335689 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.335702 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.344219 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:32:26.123540447 +0000 UTC Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.345541 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:20 crc kubenswrapper[4940]: E0223 08:49:20.345757 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.439029 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.439103 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.439118 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.439136 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.439148 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.542030 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.542073 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.542084 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.542101 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.542115 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.644745 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.644816 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.644829 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.644851 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.644864 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.748035 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.748194 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.748221 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.748246 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.748264 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.850870 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.850934 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.850952 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.850984 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.851002 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.953711 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.953758 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.953769 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.953791 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:20 crc kubenswrapper[4940]: I0223 08:49:20.953806 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:20Z","lastTransitionTime":"2026-02-23T08:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.057241 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.057319 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.057343 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.057399 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.057421 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.160274 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.160448 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.160468 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.160491 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.160508 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.264320 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.264394 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.264406 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.264429 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.264443 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.345347 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:29:47.071037567 +0000 UTC Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.345538 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.345602 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:21 crc kubenswrapper[4940]: E0223 08:49:21.345762 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:21 crc kubenswrapper[4940]: E0223 08:49:21.346434 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.368360 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.368461 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.368481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.368508 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.368556 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.471046 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.471098 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.471110 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.471134 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.471156 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.573733 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.573766 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.573779 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.573817 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.573830 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.675774 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.675811 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.675821 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.675835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.675845 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.779365 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.779418 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.779435 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.779457 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.779469 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.882802 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.882898 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.882917 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.882972 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.882991 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.985465 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.985538 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.985552 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.985575 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:21 crc kubenswrapper[4940]: I0223 08:49:21.985589 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:21Z","lastTransitionTime":"2026-02-23T08:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.088938 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.088997 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.089011 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.089035 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.089049 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.191927 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.191975 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.191985 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.192006 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.192021 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.295116 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.295180 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.295195 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.295218 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.295232 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.344892 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:22 crc kubenswrapper[4940]: E0223 08:49:22.345269 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.345645 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:02:04.16601208 +0000 UTC Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.398728 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.399300 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.399315 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.399343 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.399360 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.501375 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.501421 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.501436 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.501463 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.501480 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.604352 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.604670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.604775 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.604855 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.605002 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.708051 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.708092 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.708101 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.708116 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.708125 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.765961 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.766031 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.780781 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c
07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.793186 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.808415 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.810925 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.811003 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.811028 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.811063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.811087 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.821122 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.831166 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.842809 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.854842 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.870884 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.914209 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.914245 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.914257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.914276 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:22 crc kubenswrapper[4940]: I0223 08:49:22.914288 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:22Z","lastTransitionTime":"2026-02-23T08:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.017844 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.017901 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.017918 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.017947 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.017964 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.120950 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.120990 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.121001 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.121018 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.121030 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.224092 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.224151 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.224174 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.224201 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.224226 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.272545 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.272650 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.272707 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.272744 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.272782 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.291794 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.295875 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.295926 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.295936 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.295950 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.295967 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.317500 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.323381 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.323440 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.323452 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.323512 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.323525 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.343896 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.344659 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.344773 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.344835 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.345052 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.346048 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:37:22.159312803 +0000 UTC Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.348517 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.348558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.348568 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.348586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.348601 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.368450 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.372854 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.372879 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.372890 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.372907 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.372919 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.388317 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:23 crc kubenswrapper[4940]: E0223 08:49:23.388419 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.390740 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.390771 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.390783 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.390822 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.390835 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.493543 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.493595 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.493649 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.493674 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.493687 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.597824 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.597931 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.597951 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.597979 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.597999 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.701215 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.701268 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.701279 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.701349 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.701361 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.803983 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.804051 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.804061 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.804079 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.804089 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.907511 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.907571 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.907590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.907643 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:23 crc kubenswrapper[4940]: I0223 08:49:23.907665 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:23Z","lastTransitionTime":"2026-02-23T08:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.010570 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.010893 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.010988 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.011068 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.011161 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.114225 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.114298 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.114311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.114492 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.114504 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.217765 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.217816 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.217826 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.217844 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.217856 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.321384 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.321444 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.321461 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.321484 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.321506 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.345039 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:24 crc kubenswrapper[4940]: E0223 08:49:24.345241 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.347262 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 18:57:29.971101417 +0000 UTC Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.424188 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.424382 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.424400 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.424423 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.424440 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.456910 4940 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.527186 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.527229 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.527238 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.527251 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.527262 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.630501 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.630553 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.630563 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.630584 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.630603 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.733479 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.733523 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.733534 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.733553 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.733564 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.835666 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.835712 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.835724 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.835746 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.835759 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.938431 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.938510 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.938525 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.938551 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:24 crc kubenswrapper[4940]: I0223 08:49:24.938568 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:24Z","lastTransitionTime":"2026-02-23T08:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.040669 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.040708 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.040717 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.040732 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.040741 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.143851 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.143909 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.143922 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.143944 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.143962 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.247817 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.247883 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.247901 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.247925 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.247948 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.345737 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.345783 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:25 crc kubenswrapper[4940]: E0223 08:49:25.346054 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:25 crc kubenswrapper[4940]: E0223 08:49:25.346168 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.347782 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:22:58.979353241 +0000 UTC Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.350233 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.350264 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.350274 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.350294 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.350307 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.453183 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.453260 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.453277 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.453300 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.453315 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.556122 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.556177 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.556186 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.556201 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.556210 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.659670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.659721 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.659735 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.659750 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.659760 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.762823 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.762925 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.762942 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.763067 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.763129 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.866343 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.866384 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.866393 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.866408 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.866420 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.968876 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.968932 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.968943 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.968961 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:25 crc kubenswrapper[4940]: I0223 08:49:25.968973 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:25Z","lastTransitionTime":"2026-02-23T08:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.071941 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.072001 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.072014 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.072035 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.072046 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.174938 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.175005 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.175018 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.175035 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.175046 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.277730 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.277794 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.277811 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.277836 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.277854 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.344598 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:26 crc kubenswrapper[4940]: E0223 08:49:26.344761 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.348698 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:06:21.146442183 +0000 UTC Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.381061 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.381141 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.381160 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.381190 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.381209 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.483534 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.483588 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.483636 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.483661 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.483678 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.585750 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.585796 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.585830 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.585846 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.585857 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.688526 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.688588 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.688605 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.688655 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.688672 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.790236 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.790288 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.790300 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.790319 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.790332 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.892955 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.893029 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.893041 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.893064 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.893079 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.995587 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.995653 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.995667 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.995684 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:26 crc kubenswrapper[4940]: I0223 08:49:26.995695 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:26Z","lastTransitionTime":"2026-02-23T08:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.099117 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.099181 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.099213 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.099241 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.099256 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.202472 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.202545 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.202564 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.202593 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.202636 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.306413 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.306490 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.306516 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.306551 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.306660 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.345419 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:27 crc kubenswrapper[4940]: E0223 08:49:27.345656 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.345953 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:27 crc kubenswrapper[4940]: E0223 08:49:27.346100 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.349306 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:53:31.577904627 +0000 UTC Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.410003 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.410049 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.410062 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.410082 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.410093 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.514777 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.514861 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.514895 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.514930 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.514959 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.617978 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.618059 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.618074 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.618090 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.618100 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.720628 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.720686 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.720731 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.720750 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.720761 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.823815 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.823860 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.823869 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.823887 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.823899 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.927844 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.928008 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.928018 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.928037 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:27 crc kubenswrapper[4940]: I0223 08:49:27.928049 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:27Z","lastTransitionTime":"2026-02-23T08:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.031274 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.031331 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.031344 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.031365 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.031377 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.133990 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.134046 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.134060 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.134079 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.134091 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.236490 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.236534 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.236545 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.236562 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.236574 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.339645 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.339687 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.339697 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.339734 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.339746 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.345382 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:28 crc kubenswrapper[4940]: E0223 08:49:28.345521 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.350105 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:03:36.437963237 +0000 UTC Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.442292 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.442345 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.442359 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.442381 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.442398 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.545668 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.545717 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.545734 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.545757 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.545772 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.654502 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.654565 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.654578 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.654600 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.654987 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.758985 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.759042 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.759061 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.759084 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.759101 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.783338 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.784670 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.802209 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.815395 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.829996 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.845147 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.860759 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.861129 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.861145 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.861154 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.861169 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.861179 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.876110 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.890624 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.905817 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.923047 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.936700 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.949891 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.963586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.963658 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.963674 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.963702 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.963721 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:28Z","lastTransitionTime":"2026-02-23T08:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.967070 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:28 crc kubenswrapper[4940]: I0223 08:49:28.988868 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:28Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.003288 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.016085 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.034212 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.066392 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.066441 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.066452 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.066470 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.066483 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.168333 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.168374 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.168384 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.168401 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.168412 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.271524 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.271725 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.271876 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.272067 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.272100 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.345699 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.345720 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:29 crc kubenswrapper[4940]: E0223 08:49:29.345896 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:29 crc kubenswrapper[4940]: E0223 08:49:29.346001 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.350473 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:25:16.690156099 +0000 UTC Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.361057 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.372557 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.374532 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.374586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.374601 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.374643 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.374658 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.385468 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.398550 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.410898 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.423652 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d
8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.439787 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.456013 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.481755 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.481854 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.481883 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.481923 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.481950 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.584288 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.584754 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.584768 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.584787 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.584803 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.686695 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.686742 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.686751 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.686764 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.686773 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.787969 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.788010 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.788021 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.788038 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.788049 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.890173 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.890227 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.890243 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.890261 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.890277 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.992411 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.992446 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.992456 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.992468 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:29 crc kubenswrapper[4940]: I0223 08:49:29.992478 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:29Z","lastTransitionTime":"2026-02-23T08:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.095314 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.095353 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.095366 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.095384 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.095398 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.198334 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.198405 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.198427 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.198456 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.198479 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.300932 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.300981 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.300995 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.301016 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.301030 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.345431 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:30 crc kubenswrapper[4940]: E0223 08:49:30.345634 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.350637 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:22:31.483894163 +0000 UTC Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.402842 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.402878 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.402889 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.402904 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.402916 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.505508 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.505580 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.505597 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.505637 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.505658 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.607729 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.607781 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.607790 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.607806 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.607816 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.710667 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.710707 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.710719 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.710735 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.710747 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.712291 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4vcwd"] Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.712650 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.714600 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.715088 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.715459 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.730747 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.745699 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.763553 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.779333 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.799133 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.810674 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.812797 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.812840 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.812855 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.812873 4940 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.812888 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.827576 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.845328 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.851290 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtwl\" (UniqueName: \"kubernetes.io/projected/41834650-70c0-4558-9052-d7cdfc785e09-kube-api-access-dwtwl\") pod \"node-resolver-4vcwd\" (UID: \"41834650-70c0-4558-9052-d7cdfc785e09\") " pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.851333 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/41834650-70c0-4558-9052-d7cdfc785e09-hosts-file\") pod \"node-resolver-4vcwd\" (UID: \"41834650-70c0-4558-9052-d7cdfc785e09\") " pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.858894 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:30Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.915698 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.915751 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.915765 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.915788 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.915801 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:30Z","lastTransitionTime":"2026-02-23T08:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.952063 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtwl\" (UniqueName: \"kubernetes.io/projected/41834650-70c0-4558-9052-d7cdfc785e09-kube-api-access-dwtwl\") pod \"node-resolver-4vcwd\" (UID: \"41834650-70c0-4558-9052-d7cdfc785e09\") " pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.952156 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/41834650-70c0-4558-9052-d7cdfc785e09-hosts-file\") pod \"node-resolver-4vcwd\" (UID: \"41834650-70c0-4558-9052-d7cdfc785e09\") " pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.952273 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/41834650-70c0-4558-9052-d7cdfc785e09-hosts-file\") pod \"node-resolver-4vcwd\" (UID: \"41834650-70c0-4558-9052-d7cdfc785e09\") " pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:30 crc kubenswrapper[4940]: I0223 08:49:30.983681 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtwl\" (UniqueName: \"kubernetes.io/projected/41834650-70c0-4558-9052-d7cdfc785e09-kube-api-access-dwtwl\") pod \"node-resolver-4vcwd\" (UID: \"41834650-70c0-4558-9052-d7cdfc785e09\") " pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.018273 4940 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.018333 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.018346 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.018367 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.018378 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.032632 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4vcwd" Feb 23 08:49:31 crc kubenswrapper[4940]: W0223 08:49:31.046748 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41834650_70c0_4558_9052_d7cdfc785e09.slice/crio-d1ae209aaee3960b70bf0a5d2570f5b710f6102f603a60102c795b265f04a7b0 WatchSource:0}: Error finding container d1ae209aaee3960b70bf0a5d2570f5b710f6102f603a60102c795b265f04a7b0: Status 404 returned error can't find the container with id d1ae209aaee3960b70bf0a5d2570f5b710f6102f603a60102c795b265f04a7b0 Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.093499 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tj6ms"] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.094260 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.096888 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.096920 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.097098 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.097755 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.098434 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-czrqm"] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.099084 4940 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-daemon-26mgs"] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.099750 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.101538 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.101859 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.102332 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.102449 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.106957 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.107156 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.107479 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.107573 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.107766 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.118633 4940 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.120482 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.120552 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.120564 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.120579 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.120590 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.133075 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.146562 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154400 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154492 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-hostroot\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154518 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-kubelet\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154541 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-multus-certs\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154569 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154593 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-daemon-config\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154629 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-etc-kubernetes\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154647 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8jmt7\" (UniqueName: \"kubernetes.io/projected/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-kube-api-access-8jmt7\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154664 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-cni-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154680 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-os-release\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154698 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-system-cni-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154716 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-k8s-cni-cncf-io\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154736 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-netns\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154752 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-os-release\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154770 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-cni-bin\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154785 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-cnibin\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154803 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-cni-binary-copy\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154818 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-conf-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154835 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.154966 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:50:03.154947086 +0000 UTC m=+134.538153243 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.155041 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.155076 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 08:50:03.155068441 +0000 UTC m=+134.538274598 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.154854 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-cni-multus\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155358 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155393 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-system-cni-dir\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155414 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cnibin\") pod 
\"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155432 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86pt\" (UniqueName: \"kubernetes.io/projected/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-kube-api-access-j86pt\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155450 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cni-binary-copy\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155472 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.155489 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-socket-dir-parent\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.155587 4940 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.155636 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:50:03.15562718 +0000 UTC m=+134.538833337 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.163367 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.176017 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.189326 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.204572 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.217246 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.222336 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.222375 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.222389 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.222403 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.222413 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.231127 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.245391 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256569 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256755 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256789 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-hostroot\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256812 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-kubelet\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256833 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-multus-certs\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256864 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-daemon-config\") pod 
\"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256883 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-etc-kubernetes\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256903 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmt7\" (UniqueName: \"kubernetes.io/projected/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-kube-api-access-8jmt7\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.256921 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256926 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-multus-certs\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256981 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-cni-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256927 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-cni-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257018 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-os-release\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.256941 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.256881 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-kubelet\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257071 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-system-cni-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.257053 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257039 
4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-system-cni-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257102 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-os-release\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257136 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-k8s-cni-cncf-io\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.257147 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:50:03.257131051 +0000 UTC m=+134.640337208 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257171 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-k8s-cni-cncf-io\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257017 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-etc-kubernetes\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257179 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-netns\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257220 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-os-release\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257243 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-cni-bin\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257273 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-run-netns\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257278 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-cnibin\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257301 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-cni-bin\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257314 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-rootfs\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257333 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-cnibin\") pod \"multus-czrqm\" (UID: 
\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257338 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-cni-binary-copy\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257365 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-conf-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257388 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-cni-multus\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257328 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-os-release\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257411 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqgl\" (UniqueName: \"kubernetes.io/projected/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-kube-api-access-gpqgl\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 
08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257439 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257443 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-conf-dir\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257449 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-host-var-lib-cni-multus\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257487 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-system-cni-dir\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257511 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cnibin\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 
08:49:31.257518 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-daemon-config\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257535 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86pt\" (UniqueName: \"kubernetes.io/projected/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-kube-api-access-j86pt\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257562 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-system-cni-dir\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257563 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257576 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cnibin\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc 
kubenswrapper[4940]: I0223 08:49:31.257587 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-proxy-tls\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257638 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cni-binary-copy\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257664 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257686 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257711 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-socket-dir-parent\") pod \"multus-czrqm\" (UID: 
\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257776 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-multus-socket-dir-parent\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.257790 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.257812 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.257823 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.257860 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:50:03.257843305 +0000 UTC m=+134.641049552 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.257794 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-cni-binary-copy\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.258266 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.258321 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-hostroot\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.258448 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-cni-binary-copy\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.258548 
4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.269518 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.277484 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86pt\" (UniqueName: \"kubernetes.io/projected/353c05f4-9ef1-4c0f-8388-8b56cc4c22d5-kube-api-access-j86pt\") pod \"multus-additional-cni-plugins-tj6ms\" (UID: \"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\") " pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.277541 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmt7\" (UniqueName: 
\"kubernetes.io/projected/ec3904ad-5d0b-46b4-9c13-68454d9a3cb2-kube-api-access-8jmt7\") pod \"multus-czrqm\" (UID: \"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\") " pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.283755 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.297582 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.309995 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.325201 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.325260 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.325276 4940 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.325298 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.325314 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.326010 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.341101 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.345465 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.345630 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.345809 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:31 crc kubenswrapper[4940]: E0223 08:49:31.345898 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.350908 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:55:05.791316302 +0000 UTC Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.356095 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.358862 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqgl\" (UniqueName: \"kubernetes.io/projected/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-kube-api-access-gpqgl\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.358910 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-proxy-tls\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.358927 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.359295 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-rootfs\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.359368 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-rootfs\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.359649 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.363135 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-proxy-tls\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.370332 4940 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.377129 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqgl\" (UniqueName: \"kubernetes.io/projected/f3f2cfd6-5ddf-436d-998f-440f1cc642b1-kube-api-access-gpqgl\") pod \"machine-config-daemon-26mgs\" (UID: \"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\") " pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.382839 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.393930 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.405876 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.415983 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.421893 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-czrqm" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.426819 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.426849 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.426860 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.426875 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.426885 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: W0223 08:49:31.427682 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod353c05f4_9ef1_4c0f_8388_8b56cc4c22d5.slice/crio-b3923fe9f717309e893cfdf3c66f118953ff4595e278cc11a4d954511979a1bf WatchSource:0}: Error finding container b3923fe9f717309e893cfdf3c66f118953ff4595e278cc11a4d954511979a1bf: Status 404 returned error can't find the container with id b3923fe9f717309e893cfdf3c66f118953ff4595e278cc11a4d954511979a1bf Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.428469 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:49:31 crc kubenswrapper[4940]: W0223 08:49:31.432020 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3904ad_5d0b_46b4_9c13_68454d9a3cb2.slice/crio-a50302dcefcd2ebdd72742249e25d210225bdf1711d8b72e87df79a2347fa3fc WatchSource:0}: Error finding container a50302dcefcd2ebdd72742249e25d210225bdf1711d8b72e87df79a2347fa3fc: Status 404 returned error can't find the container with id a50302dcefcd2ebdd72742249e25d210225bdf1711d8b72e87df79a2347fa3fc Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.476364 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qkw6w"] Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.477953 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.479978 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.482110 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.483167 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.483289 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.483732 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.483849 4940 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.484041 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.495983 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.507940 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.521279 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.529477 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.529510 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.529519 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.529536 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.529547 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.535799 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.546304 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.557003 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560581 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-ovn-kubernetes\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560656 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2ck\" (UniqueName: \"kubernetes.io/projected/d0b5a971-c6f4-4518-9bb3-49d228275668-kube-api-access-sq2ck\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560685 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560856 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-bin\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560899 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0b5a971-c6f4-4518-9bb3-49d228275668-ovn-node-metrics-cert\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560935 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-env-overrides\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560957 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-systemd-units\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.560976 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-config\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561002 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-netns\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561021 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-netd\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561111 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-node-log\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561154 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: 
\"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561196 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-var-lib-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561233 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-slash\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561250 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-systemd\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561266 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-etc-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561285 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-ovn\") pod \"ovnkube-node-qkw6w\" (UID: 
\"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561301 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-log-socket\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561326 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-script-lib\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.561342 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-kubelet\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.568980 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.589194 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.602154 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.620348 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.631933 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.633395 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.633465 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.633477 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.633501 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.633514 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.646685 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.657160 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662660 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-etc-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662706 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-ovn\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662728 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-log-socket\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662756 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-slash\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662775 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-systemd\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662798 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-script-lib\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662797 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-etc-openvswitch\") pod 
\"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662820 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-kubelet\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662839 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2ck\" (UniqueName: \"kubernetes.io/projected/d0b5a971-c6f4-4518-9bb3-49d228275668-kube-api-access-sq2ck\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662867 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-slash\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662871 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-ovn-kubernetes\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662896 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662927 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-log-socket\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662933 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-bin\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662954 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0b5a971-c6f4-4518-9bb3-49d228275668-ovn-node-metrics-cert\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662975 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-env-overrides\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663002 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-systemd-units\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663020 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-config\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663042 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-netns\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663059 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-netd\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663078 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-node-log\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663103 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: 
I0223 08:49:31.663122 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-var-lib-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663188 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-var-lib-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663234 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-ovn-kubernetes\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-systemd\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663266 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.662899 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-ovn\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663310 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-bin\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663341 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-netns\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663371 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-netd\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663401 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-node-log\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663426 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-openvswitch\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663762 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-systemd-units\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663811 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-kubelet\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.663934 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-script-lib\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.664062 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-env-overrides\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.664097 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-config\") pod \"ovnkube-node-qkw6w\" (UID: 
\"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.668062 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0b5a971-c6f4-4518-9bb3-49d228275668-ovn-node-metrics-cert\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.680261 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2ck\" (UniqueName: \"kubernetes.io/projected/d0b5a971-c6f4-4518-9bb3-49d228275668-kube-api-access-sq2ck\") pod \"ovnkube-node-qkw6w\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.735679 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.735709 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.735718 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.735730 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.735738 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.793401 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerStarted","Data":"8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.793443 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerStarted","Data":"b3923fe9f717309e893cfdf3c66f118953ff4595e278cc11a4d954511979a1bf"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.795242 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4vcwd" event={"ID":"41834650-70c0-4558-9052-d7cdfc785e09","Type":"ContainerStarted","Data":"5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.795274 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4vcwd" event={"ID":"41834650-70c0-4558-9052-d7cdfc785e09","Type":"ContainerStarted","Data":"d1ae209aaee3960b70bf0a5d2570f5b710f6102f603a60102c795b265f04a7b0"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.798318 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.798347 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.798358 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"950f2a2b9b6fb7b8e8ceddb7d8420cf85910ea827358ed25d43d87d231e5f502"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.799988 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerStarted","Data":"f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.800112 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerStarted","Data":"a50302dcefcd2ebdd72742249e25d210225bdf1711d8b72e87df79a2347fa3fc"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.806305 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.818124 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.829184 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.838693 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.838755 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.838767 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.838793 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.838809 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.841229 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.853716 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.873348 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.884644 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.892879 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.894045 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.905769 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.916863 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.931936 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.941159 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.941192 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.941201 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.941214 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.941223 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:31Z","lastTransitionTime":"2026-02-23T08:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.946133 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a
79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.965128 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.975815 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:31 crc kubenswrapper[4940]: I0223 08:49:31.989709 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.001659 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:31Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.012110 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.022253 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.032322 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.043073 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.043119 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.043128 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.043147 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.043158 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.045330 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.058435 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.070490 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.088874 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.102189 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.116707 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.128879 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.145095 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.145213 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.145224 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.145241 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.145252 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.247275 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.247307 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.247317 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.247333 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.247344 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.345462 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:32 crc kubenswrapper[4940]: E0223 08:49:32.345630 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.350207 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.350261 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.350273 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.350289 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.350303 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.351280 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:42:22.504707679 +0000 UTC Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.452715 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.452759 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.452770 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.452787 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.452802 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.555423 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.555550 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.555577 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.555637 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.555671 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.658002 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.658038 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.658047 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.658063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.658072 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.760168 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.760219 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.760231 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.760250 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.760262 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.805002 4940 generic.go:334] "Generic (PLEG): container finished" podID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerID="56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933" exitCode=0 Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.805064 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.805090 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"4f21335f3055d4efa629aa9cfc916ee3c69d12b98c562517e8087e1715257691"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.806534 4940 generic.go:334] "Generic (PLEG): container finished" podID="353c05f4-9ef1-4c0f-8388-8b56cc4c22d5" containerID="8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12" exitCode=0 Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.807049 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerDied","Data":"8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.835477 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.852463 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.872120 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.872170 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.872182 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 
08:49:32.872198 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.872213 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.879646 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.893939 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.907339 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.919564 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.932588 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d
09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.949360 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.962128 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.975777 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.975819 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.975828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.975847 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.975860 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:32Z","lastTransitionTime":"2026-02-23T08:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.976176 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:32 crc kubenswrapper[4940]: I0223 08:49:32.988743 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.001887 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:32Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.017941 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.033841 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.051832 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.065840 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.077942 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.077988 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.078001 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.078017 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.078028 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.079965 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.097650 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.117669 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.131770 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.146142 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.163360 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d
8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.177712 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.180377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.180442 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.180454 4940 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.180478 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.180496 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.194818 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.207827 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.222208 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.283631 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.283667 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.283677 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.283691 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.283700 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.344728 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.344863 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.345108 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.345399 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.349745 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.350069 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.351402 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:01:17.809601213 +0000 UTC Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.386312 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.386350 4940 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.386358 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.386372 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.386381 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.430498 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.430544 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.430556 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.430576 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.430587 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.445747 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.451082 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.451122 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.451135 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.451149 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.451159 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.471022 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.476873 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.476938 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.476953 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.476982 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.476997 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.496696 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.501262 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.501502 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.501598 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.501726 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.501840 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.517573 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.523183 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.523239 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.523258 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.523280 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.523293 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.541398 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: E0223 08:49:33.541561 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.543187 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.543215 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.543227 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.543245 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.543258 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.646342 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.646386 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.646397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.646413 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.646423 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.748529 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.748567 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.748576 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.748590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.748599 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.814798 4940 generic.go:334] "Generic (PLEG): container finished" podID="353c05f4-9ef1-4c0f-8388-8b56cc4c22d5" containerID="7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f" exitCode=0 Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.814882 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerDied","Data":"7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.820963 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.821006 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.821019 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.821029 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.821037 4940 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.821047 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.833392 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b49
1cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.846970 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.851033 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.851086 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.851106 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.851134 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.851159 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.860248 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.878380 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.893456 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189
a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.909477 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.930320 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.943851 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.953989 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.954031 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.954042 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 
08:49:33.954058 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.954068 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:33Z","lastTransitionTime":"2026-02-23T08:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.958960 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.970247 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.982856 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:33 crc kubenswrapper[4940]: I0223 08:49:33.996829 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:33Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.007901 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.061362 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.061417 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.061428 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.061445 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.061455 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.164900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.164960 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.164977 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.165002 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.165020 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.268339 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.268408 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.268420 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.268443 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.268460 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.345025 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:34 crc kubenswrapper[4940]: E0223 08:49:34.345174 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.355684 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:05:44.836892207 +0000 UTC Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.371465 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.371501 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.371509 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.371524 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.371535 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.473804 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.473869 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.473883 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.473900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.473914 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.575872 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.575917 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.575928 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.575945 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.575958 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.678697 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.678741 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.678749 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.678763 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.678773 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.781504 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.781932 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.782072 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.782215 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.782407 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.826478 4940 generic.go:334] "Generic (PLEG): container finished" podID="353c05f4-9ef1-4c0f-8388-8b56cc4c22d5" containerID="e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541" exitCode=0 Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.826524 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerDied","Data":"e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.849764 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.864371 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.878768 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.884054 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.884091 4940 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.884104 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.884120 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.884131 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.890842 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.906014 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.922227 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.935020 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.947165 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.966086 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.978170 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.986795 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.986832 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.986844 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 
08:49:34.986863 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.986874 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:34Z","lastTransitionTime":"2026-02-23T08:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:34 crc kubenswrapper[4940]: I0223 08:49:34.992844 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:34Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.008361 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.020370 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.089890 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.089931 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.089942 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc 
kubenswrapper[4940]: I0223 08:49:35.089958 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.089968 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.192586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.192639 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.192650 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.192668 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.192680 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.295531 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.295875 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.295973 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.296076 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.296260 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.344736 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.344750 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:35 crc kubenswrapper[4940]: E0223 08:49:35.345240 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:35 crc kubenswrapper[4940]: E0223 08:49:35.345465 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.356659 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:55:25.464982508 +0000 UTC Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.398629 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.398674 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.398686 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.398703 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.398715 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.502034 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.502103 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.502125 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.502153 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.502175 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.605026 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.605302 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.605377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.605451 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.605524 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.707513 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.707559 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.707569 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.707586 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.707597 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.810218 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.810261 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.810270 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.810286 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.810295 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.832139 4940 generic.go:334] "Generic (PLEG): container finished" podID="353c05f4-9ef1-4c0f-8388-8b56cc4c22d5" containerID="fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89" exitCode=0 Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.832224 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerDied","Data":"fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.843838 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.848622 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.862369 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.878053 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.898542 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.914484 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.914515 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.914523 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.914536 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.914547 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:35Z","lastTransitionTime":"2026-02-23T08:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.917893 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.943578 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.958101 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.971849 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:35 crc kubenswrapper[4940]: I0223 08:49:35.991598 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:35Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.004317 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.015273 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.017213 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.017248 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.017257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc 
kubenswrapper[4940]: I0223 08:49:36.017273 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.017286 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.027524 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.035953 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.119225 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.119269 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.119279 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.119295 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.119305 4940 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.221406 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.221453 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.221468 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.221487 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.221502 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.324351 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.324408 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.324435 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.324459 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.324479 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.345832 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:36 crc kubenswrapper[4940]: E0223 08:49:36.345975 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.357604 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:49:37.855552048 +0000 UTC Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.360392 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.426420 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.426466 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.426477 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.426496 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.426507 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.529122 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.529152 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.529161 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.529174 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.529198 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.631691 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.631736 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.631748 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.631768 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.631779 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.734070 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.734096 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.734104 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.734116 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.734124 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.836020 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.836046 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.836054 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.836066 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.836074 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.852236 4940 generic.go:334] "Generic (PLEG): container finished" podID="353c05f4-9ef1-4c0f-8388-8b56cc4c22d5" containerID="a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a" exitCode=0 Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.852809 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerDied","Data":"a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.868667 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.885059 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.910639 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.938422 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.940714 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.940756 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.940768 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.940786 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.940915 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:36Z","lastTransitionTime":"2026-02-23T08:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.957428 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.969459 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.984855 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:36 crc kubenswrapper[4940]: I0223 08:49:36.996391 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:36Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.008581 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.017284 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.029669 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.040367 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.043009 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.043038 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.043050 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.043065 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.043077 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.053258 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.068889 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.145063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.145104 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.145117 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.145132 4940 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.145143 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.248530 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.248934 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.249045 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.249699 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.249807 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.345559 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.345721 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:37 crc kubenswrapper[4940]: E0223 08:49:37.345850 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:37 crc kubenswrapper[4940]: E0223 08:49:37.345960 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.352370 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.352414 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.352424 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.352440 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.352453 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.357761 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:38:59.487974412 +0000 UTC Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.374324 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ll9gt"] Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.374828 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.376353 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.378989 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.379079 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.379451 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.399417 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}}
,{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.415947 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.432673 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.450427 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.454732 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.454777 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.454791 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.454808 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.454819 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.465226 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.481358 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.504269 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.517240 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.522188 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac7931c6-f7d4-4166-b332-d954717f67c0-host\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.522360 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-s8clf\" (UniqueName: \"kubernetes.io/projected/ac7931c6-f7d4-4166-b332-d954717f67c0-kube-api-access-s8clf\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.522496 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac7931c6-f7d4-4166-b332-d954717f67c0-serviceca\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.528866 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d
66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.544113 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.554878 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.557222 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.557277 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.557298 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.557320 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.557334 4940 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.567178 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca7
3acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.581305 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.593647 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.607886 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.623518 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac7931c6-f7d4-4166-b332-d954717f67c0-host\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.623566 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8clf\" (UniqueName: \"kubernetes.io/projected/ac7931c6-f7d4-4166-b332-d954717f67c0-kube-api-access-s8clf\") pod \"node-ca-ll9gt\" 
(UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.623624 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac7931c6-f7d4-4166-b332-d954717f67c0-serviceca\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.623727 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac7931c6-f7d4-4166-b332-d954717f67c0-host\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.624497 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ac7931c6-f7d4-4166-b332-d954717f67c0-serviceca\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.648046 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8clf\" (UniqueName: \"kubernetes.io/projected/ac7931c6-f7d4-4166-b332-d954717f67c0-kube-api-access-s8clf\") pod \"node-ca-ll9gt\" (UID: \"ac7931c6-f7d4-4166-b332-d954717f67c0\") " pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.659410 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.659451 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.659463 4940 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.659481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.659495 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.692983 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ll9gt" Feb 23 08:49:37 crc kubenswrapper[4940]: W0223 08:49:37.713752 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7931c6_f7d4_4166_b332_d954717f67c0.slice/crio-cebc6213a80a8f74cfa83867716e13d4ae4a21e2d7a6f7a7ba8fa2c63c3b2daf WatchSource:0}: Error finding container cebc6213a80a8f74cfa83867716e13d4ae4a21e2d7a6f7a7ba8fa2c63c3b2daf: Status 404 returned error can't find the container with id cebc6213a80a8f74cfa83867716e13d4ae4a21e2d7a6f7a7ba8fa2c63c3b2daf Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.762654 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.762707 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.762718 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc 
kubenswrapper[4940]: I0223 08:49:37.762737 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.762748 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.859813 4940 generic.go:334] "Generic (PLEG): container finished" podID="353c05f4-9ef1-4c0f-8388-8b56cc4c22d5" containerID="d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6" exitCode=0 Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.859869 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerDied","Data":"d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.861184 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ll9gt" event={"ID":"ac7931c6-f7d4-4166-b332-d954717f67c0","Type":"ContainerStarted","Data":"cebc6213a80a8f74cfa83867716e13d4ae4a21e2d7a6f7a7ba8fa2c63c3b2daf"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.865380 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.865415 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.865424 4940 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.865439 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.865451 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.874019 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9a
a0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\
\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.890949 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.907184 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.927571 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.944478 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.958363 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.969250 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.969300 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.969311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.969329 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.969342 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:37Z","lastTransitionTime":"2026-02-23T08:49:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.979160 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:37 crc kubenswrapper[4940]: I0223 08:49:37.997110 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:37Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.010749 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.023053 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.034490 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.046191 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.060264 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.071809 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc 
kubenswrapper[4940]: I0223 08:49:38.071845 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.071878 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.071899 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.071911 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.076637 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.086377 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.174093 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.174133 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.174142 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 
08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.174156 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.174167 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.277387 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.277419 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.277428 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.277462 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.277476 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.344983 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:38 crc kubenswrapper[4940]: E0223 08:49:38.345120 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.358539 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:23:40.288294211 +0000 UTC Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.380725 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.380779 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.380795 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.380816 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.380830 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.483701 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.483749 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.483952 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.483971 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.483984 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.586812 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.586870 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.586881 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.586897 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.586909 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.689953 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.690021 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.690034 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.690053 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.690064 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.793241 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.793606 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.793654 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.793671 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.793681 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.866487 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ll9gt" event={"ID":"ac7931c6-f7d4-4166-b332-d954717f67c0","Type":"ContainerStarted","Data":"e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.872747 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" event={"ID":"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5","Type":"ContainerStarted","Data":"a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.877241 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.878035 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.878067 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.878078 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.885364 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.896774 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.896835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.896848 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.896868 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc 
kubenswrapper[4940]: I0223 08:49:38.896881 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.898891 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.932591 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.934925 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.936509 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.944893 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.959499 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.977637 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.994591 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:38Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.999002 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:38 crc 
kubenswrapper[4940]: I0223 08:49:38.999047 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.999058 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.999074 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:38 crc kubenswrapper[4940]: I0223 08:49:38.999085 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:38Z","lastTransitionTime":"2026-02-23T08:49:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.016325 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.038271 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.054790 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.076550 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.093682 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.101522 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.101568 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.101581 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.101602 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.101634 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.106886 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.118743 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.131500 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.152658 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f59520889
33827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.165002 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.177716 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.188189 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.202380 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.204313 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.204335 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.204344 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.204359 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.204371 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.214946 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:
49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.234357 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.244522 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.255093 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.265477 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.279288 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.290734 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.302236 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.306668 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.306703 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.306714 4940 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.306732 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.306745 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.319831 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc
8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.338060 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.345401 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.345502 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:39 crc kubenswrapper[4940]: E0223 08:49:39.345546 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:39 crc kubenswrapper[4940]: E0223 08:49:39.345687 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.358105 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop 
'(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.359071 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:18:35.395857484 +0000 UTC Feb 23 08:49:39 crc 
kubenswrapper[4940]: I0223 08:49:39.370820 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.385357 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.400341 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.408538 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.408572 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.408583 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.408599 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.408643 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.412819 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.427712 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.446105 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.462805 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.491021 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.503999 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.510675 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.510712 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.510754 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.510773 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.510784 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.518792 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.529971 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.540969 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.554066 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.566492 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:39Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.612678 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.612722 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.612737 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.612758 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.612776 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.715359 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.715412 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.715423 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.715440 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.715843 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.818114 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.818165 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.818177 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.818193 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.818205 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.920444 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.920498 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.920511 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.920533 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:39 crc kubenswrapper[4940]: I0223 08:49:39.920549 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:39Z","lastTransitionTime":"2026-02-23T08:49:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.023206 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.023286 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.023307 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.023336 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.023360 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.126025 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.126094 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.126119 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.126150 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.126182 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.229915 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.229964 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.229975 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.229994 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.230008 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.332117 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.332164 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.332222 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.332259 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.332272 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.345449 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:40 crc kubenswrapper[4940]: E0223 08:49:40.345592 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.359955 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:49:47.662771455 +0000 UTC Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.435001 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.435040 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.435049 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.435061 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.435070 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.537777 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.537809 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.537817 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.537831 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.537841 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.640347 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.640403 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.640412 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.640429 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.640439 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.743804 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.743842 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.743852 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.743866 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.743877 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.847112 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.847162 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.847174 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.847192 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.847204 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.888941 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/0.log" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.892398 4940 generic.go:334] "Generic (PLEG): container finished" podID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerID="173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe" exitCode=1 Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.892469 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.893136 4940 scope.go:117] "RemoveContainer" containerID="173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.905756 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.918643 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.930885 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.946917 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.951250 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.951308 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.951322 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.951347 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.951366 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:40Z","lastTransitionTime":"2026-02-23T08:49:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.968201 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.983504 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:40 crc kubenswrapper[4940]: I0223 08:49:40.998521 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:40Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.010439 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.025636 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.040195 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.056240 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc 
kubenswrapper[4940]: I0223 08:49:41.056263 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.056271 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.056285 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.056294 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.058250 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08
:49:40Z\\\",\\\"message\\\":\\\":311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 08:49:40.833226 6533 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 08:49:40.833452 6533 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 08:49:40.833784 6533 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 08:49:40.833796 6533 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 08:49:40.833819 6533 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 08:49:40.833842 6533 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 08:49:40.833862 6533 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 08:49:40.833873 6533 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 08:49:40.833880 6533 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 08:49:40.833898 6533 factory.go:656] Stopping watch factory\\\\nI0223 08:49:40.833917 6533 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
08:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd
933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.069408 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.085547 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.099658 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.113496 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.158927 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.158999 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.159016 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.159039 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.159055 4940 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.261844 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.261879 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.261887 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.261900 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.261911 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.344802 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.344871 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:41 crc kubenswrapper[4940]: E0223 08:49:41.344917 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:41 crc kubenswrapper[4940]: E0223 08:49:41.345010 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.361109 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:28:40.719241516 +0000 UTC Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.363842 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.364729 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.364766 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.364794 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 
08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.364812 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.466760 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.466798 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.466809 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.466826 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.466837 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.569115 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.569146 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.569155 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.569167 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.569176 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.671736 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.671891 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.671919 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.671948 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.671969 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.774502 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.774548 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.774561 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.774576 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.774587 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.877553 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.877651 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.877675 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.877705 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.877730 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.899116 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/1.log" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.900240 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/0.log" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.903048 4940 generic.go:334] "Generic (PLEG): container finished" podID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerID="69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56" exitCode=1 Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.903097 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.903138 4940 scope.go:117] "RemoveContainer" containerID="173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.904406 4940 scope.go:117] "RemoveContainer" containerID="69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56" Feb 23 08:49:41 crc kubenswrapper[4940]: E0223 08:49:41.904762 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.923392 4940 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.937756 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.952085 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.967974 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.980315 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.980382 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.980397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.980412 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.980423 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:41Z","lastTransitionTime":"2026-02-23T08:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:41 crc kubenswrapper[4940]: I0223 08:49:41.985387 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173a2928eeb364258d9dd197699c532181920160acd9a1c8feeca71a17f214fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:40Z\\\",\\\"message\\\":\\\":311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 08:49:40.833226 6533 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 08:49:40.833452 6533 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 08:49:40.833784 6533 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 08:49:40.833796 6533 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 08:49:40.833819 6533 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 08:49:40.833842 6533 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 08:49:40.833862 6533 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 08:49:40.833873 6533 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 08:49:40.833880 6533 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 08:49:40.833898 6533 factory.go:656] Stopping watch factory\\\\nI0223 08:49:40.833917 6533 ovnkube.go:599] Stopped ovnkube\\\\nI0223 08:49:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 
services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766
bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:41Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.002518 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.016722 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.029529 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.039601 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.050534 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.061700 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.071358 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.081000 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.082213 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.082250 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.082260 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.082275 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.082287 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.093869 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.102786 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.185182 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.185230 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.185239 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.185254 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.185265 4940 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.288053 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.288122 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.288146 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.288169 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.288185 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.344950 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:42 crc kubenswrapper[4940]: E0223 08:49:42.345119 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.361232 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:20:47.260559151 +0000 UTC Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.390515 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.390544 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.390555 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.390569 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.390580 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.492597 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.492847 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.492924 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.493003 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.493102 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.595992 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.596297 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.596377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.596454 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.596528 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.699147 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.699435 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.699514 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.699591 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.699694 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.801740 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.801808 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.801827 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.801888 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.801906 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.904838 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.905064 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.905075 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.905095 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.905108 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:42Z","lastTransitionTime":"2026-02-23T08:49:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.912009 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/1.log" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.917467 4940 scope.go:117] "RemoveContainer" containerID="69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56" Feb 23 08:49:42 crc kubenswrapper[4940]: E0223 08:49:42.917693 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.934925 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.945857 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.963628 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.979747 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:42 crc kubenswrapper[4940]: I0223 08:49:42.996510 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:42Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.008087 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.008133 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.008145 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.008171 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.008184 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.013451 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.025463 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.039894 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.052171 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.064244 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.076241 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.092234 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.109577 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.110675 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.110721 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.110732 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.110749 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.110762 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.131769 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.141373 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.213745 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.213787 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.213797 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.213812 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.213822 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.316646 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.316697 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.316709 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.316726 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.316737 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.336530 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8"] Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.337457 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.339781 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.341483 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.344537 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.344673 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.345169 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.345596 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.360072 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.362064 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:39:28.57677004 +0000 UTC Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.373540 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.392147 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.406825 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.419310 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.419352 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.419362 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.419377 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.419390 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.427669 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.441484 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.453142 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.468656 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.488317 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.488678 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.488786 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4p7m\" (UniqueName: \"kubernetes.io/projected/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-kube-api-access-w4p7m\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.488913 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.491223 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.520321 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.521449 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.521503 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.521516 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.521533 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.521548 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.533559 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.548870 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.560558 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.560654 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.560679 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.560709 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.560733 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.565969 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.577027 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.578298 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.582311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.582344 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.582375 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.582391 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.582400 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.590018 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.590075 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.590128 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4p7m\" (UniqueName: \"kubernetes.io/projected/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-kube-api-access-w4p7m\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.590168 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.590974 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.591471 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.595337 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.598408 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.601048 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.604100 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.604196 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.604257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.604316 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.604378 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.606952 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.617879 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4p7m\" (UniqueName: \"kubernetes.io/projected/28e2b63c-99a7-46aa-a8e2-cee2bf5d7066-kube-api-access-w4p7m\") pod \"ovnkube-control-plane-749d76644c-gjpr8\" (UID: \"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.622569 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.626589 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.626745 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.626808 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.626867 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.626942 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.644906 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.648922 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.648963 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.648977 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.648994 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.649006 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.651098 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.666785 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.red
hat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":
504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e
8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:43Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:43 crc kubenswrapper[4940]: E0223 08:49:43.667227 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.669171 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.669342 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.669436 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.670152 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.670247 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.773420 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.773464 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.773477 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.773495 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.773506 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.875740 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.875774 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.875784 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.875816 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.875827 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.921364 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" event={"ID":"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066","Type":"ContainerStarted","Data":"f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.921412 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" event={"ID":"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066","Type":"ContainerStarted","Data":"1c73dbeaaf68408ce8608f922307d40077581c4c708f6f713d881298c06abdf0"} Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.978445 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.978481 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.978493 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.978509 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:43 crc kubenswrapper[4940]: I0223 08:49:43.978521 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:43Z","lastTransitionTime":"2026-02-23T08:49:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.056780 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jwb9b"] Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.057288 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: E0223 08:49:44.057346 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.073739 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.081027 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.081072 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.081088 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.081110 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.081126 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.090538 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.105787 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.124034 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.152025 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.166133 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc 
kubenswrapper[4940]: I0223 08:49:44.180304 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.183723 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.183864 4940 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.183957 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.184040 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.184122 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.197042 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.197114 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbl4\" (UniqueName: \"kubernetes.io/projected/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-kube-api-access-qmbl4\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.201857 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.212779 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.222948 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.232522 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.245235 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.256604 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.278899 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.287210 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.287440 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.287531 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.287620 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.287731 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.293237 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.298111 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmbl4\" (UniqueName: \"kubernetes.io/projected/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-kube-api-access-qmbl4\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.298997 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: E0223 08:49:44.299297 4940 
secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:44 crc kubenswrapper[4940]: E0223 08:49:44.299388 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:49:44.799369819 +0000 UTC m=+116.182575996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.307366 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.314375 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmbl4\" (UniqueName: \"kubernetes.io/projected/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-kube-api-access-qmbl4\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.318866 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.345133 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:44 crc kubenswrapper[4940]: E0223 08:49:44.345256 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.362449 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:12:16.802648969 +0000 UTC Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.390737 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.390791 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.390802 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.390818 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.390830 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.493778 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.493815 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.493824 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.493840 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.493850 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.596635 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.596680 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.596695 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.596712 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.596721 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.700017 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.700070 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.700084 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.700104 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.700115 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.802772 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.802829 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.802841 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.802863 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.802878 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.804172 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:44 crc kubenswrapper[4940]: E0223 08:49:44.804375 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:44 crc kubenswrapper[4940]: E0223 08:49:44.804451 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:49:45.804431566 +0000 UTC m=+117.187637723 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.905303 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.905357 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.905374 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.905398 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.905415 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:44Z","lastTransitionTime":"2026-02-23T08:49:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.926637 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" event={"ID":"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066","Type":"ContainerStarted","Data":"a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5"} Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.958605 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-re
sources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.976479 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:44 crc kubenswrapper[4940]: I0223 08:49:44.989145 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.001044 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:44Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.007683 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.007718 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.007730 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.007745 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.007755 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.018855 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.034049 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.056259 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.069224 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.081229 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.092304 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc 
kubenswrapper[4940]: I0223 08:49:45.104471 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.109433 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.109640 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.109730 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.109812 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.109942 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.120303 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.131048 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.144890 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.159380 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.170247 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc 
kubenswrapper[4940]: I0223 08:49:45.182144 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.212606 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.212673 4940 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.212688 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.212705 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.212715 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.314874 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.315228 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.315242 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.315257 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.315274 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.345542 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.345712 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:45 crc kubenswrapper[4940]: E0223 08:49:45.345920 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:45 crc kubenswrapper[4940]: E0223 08:49:45.346313 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.346431 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.363352 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:02:52.005936054 +0000 UTC Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.417972 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.418016 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.418026 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.418041 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.418050 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.520853 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.520988 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.521132 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.521270 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.521373 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.623854 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.624075 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.624199 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.624421 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.624567 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.726815 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.727109 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.727272 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.727446 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.727633 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.813720 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:45 crc kubenswrapper[4940]: E0223 08:49:45.813869 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:45 crc kubenswrapper[4940]: E0223 08:49:45.813926 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:49:47.813909541 +0000 UTC m=+119.197115698 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.829527 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.829565 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.829575 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.829590 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.829600 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.931165 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.931214 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.931249 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.931276 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.931292 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:45Z","lastTransitionTime":"2026-02-23T08:49:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.932448 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.934254 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f"} Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.934787 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.948714 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.959368 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.973038 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc kubenswrapper[4940]: I0223 08:49:45.984262 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:45 crc 
kubenswrapper[4940]: I0223 08:49:45.996887 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:45Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.010735 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.023076 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.033296 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.033329 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.033339 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.033354 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.033365 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.036452 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.048443 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.060820 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.073244 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.092369 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.111683 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.125073 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.135031 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.135394 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.135468 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.135529 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.135591 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.135667 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.145384 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.155541 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e286
2ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:46Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.238144 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.238529 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.238723 4940 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.238886 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.239048 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.341232 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.341704 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.341914 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.342105 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.342274 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.345547 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.345876 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:46 crc kubenswrapper[4940]: E0223 08:49:46.346007 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:46 crc kubenswrapper[4940]: E0223 08:49:46.346378 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.363667 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:40:13.686213767 +0000 UTC Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.444621 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.444662 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.444672 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.444686 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.444696 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.547339 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.547389 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.547403 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.547423 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.547439 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.650358 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.650421 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.650440 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.650464 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.650481 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.752873 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.752915 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.752925 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.752942 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.752952 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.855380 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.855416 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.855425 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.855438 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.855447 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.958210 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.958258 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.958272 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.958288 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:46 crc kubenswrapper[4940]: I0223 08:49:46.958299 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:46Z","lastTransitionTime":"2026-02-23T08:49:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.067755 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.067849 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.067874 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.067904 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.067925 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.171006 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.171058 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.171074 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.171094 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.171110 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.273580 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.273640 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.273650 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.273664 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.273674 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.345569 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:47 crc kubenswrapper[4940]: E0223 08:49:47.345721 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.345858 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:47 crc kubenswrapper[4940]: E0223 08:49:47.346038 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.364073 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:10:09.597839778 +0000 UTC Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.376126 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.376191 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.376209 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.376233 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.376252 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.478653 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.478691 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.478701 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.478717 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.478728 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.581507 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.581552 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.581563 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.581579 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.581593 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.684260 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.684311 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.684326 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.684345 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.684362 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.786497 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.786577 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.786656 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.786726 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.786751 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.830184 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:47 crc kubenswrapper[4940]: E0223 08:49:47.830360 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:47 crc kubenswrapper[4940]: E0223 08:49:47.830427 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:49:51.830410334 +0000 UTC m=+123.213616491 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.889567 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.889670 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.889698 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.889728 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.889750 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.992226 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.992276 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.992293 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.992317 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:47 crc kubenswrapper[4940]: I0223 08:49:47.992333 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:47Z","lastTransitionTime":"2026-02-23T08:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.095314 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.095356 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.095365 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.095380 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.095392 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:48Z","lastTransitionTime":"2026-02-23T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.198013 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.198059 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.198068 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.198083 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.198090 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:48Z","lastTransitionTime":"2026-02-23T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.300771 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.300821 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.300835 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.300852 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:48 crc kubenswrapper[4940]: I0223 08:49:48.300863 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:48Z","lastTransitionTime":"2026-02-23T08:49:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:49:49 crc kubenswrapper[4940]: E0223 08:49:49.587037 4940 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.587326 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:09:07.591440327 +0000 UTC Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.590741 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:49 crc kubenswrapper[4940]: E0223 08:49:49.590876 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.590977 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:49 crc kubenswrapper[4940]: E0223 08:49:49.591104 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.591172 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:49 crc kubenswrapper[4940]: E0223 08:49:49.591244 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.591297 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:49 crc kubenswrapper[4940]: E0223 08:49:49.591353 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:49 crc kubenswrapper[4940]: E0223 08:49:49.594057 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.609337 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.625761 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.637338 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.650033 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.666123 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.679736 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.690404 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc 
kubenswrapper[4940]: I0223 08:49:49.704347 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.716897 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.734583 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.756212 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.767929 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.780740 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.793722 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.803739 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.816860 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:49 crc kubenswrapper[4940]: I0223 08:49:49.830161 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:49Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:50 crc 
kubenswrapper[4940]: I0223 08:49:50.588329 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:15:58.139042695 +0000 UTC Feb 23 08:49:51 crc kubenswrapper[4940]: I0223 08:49:51.345353 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:51 crc kubenswrapper[4940]: I0223 08:49:51.345415 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:51 crc kubenswrapper[4940]: I0223 08:49:51.345368 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:51 crc kubenswrapper[4940]: I0223 08:49:51.345368 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:51 crc kubenswrapper[4940]: E0223 08:49:51.345676 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:51 crc kubenswrapper[4940]: E0223 08:49:51.345760 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:51 crc kubenswrapper[4940]: E0223 08:49:51.345808 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:51 crc kubenswrapper[4940]: E0223 08:49:51.345893 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:51 crc kubenswrapper[4940]: I0223 08:49:51.588888 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:34:08.899081355 +0000 UTC Feb 23 08:49:51 crc kubenswrapper[4940]: I0223 08:49:51.916142 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:51 crc kubenswrapper[4940]: E0223 08:49:51.916342 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:51 crc kubenswrapper[4940]: E0223 08:49:51.916434 4940 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:49:59.91639033 +0000 UTC m=+131.299596487 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:49:52 crc kubenswrapper[4940]: I0223 08:49:52.589711 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:41:47.460419911 +0000 UTC Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.344701 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.345232 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.345267 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.345183 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.345239 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.345402 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.345517 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.345701 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.358310 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.590177 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:24:48.189826209 +0000 UTC Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.849537 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.849665 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.849691 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.849721 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.849743 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:53Z","lastTransitionTime":"2026-02-23T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.865504 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:53Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.870374 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.870415 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.870426 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.870442 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.870454 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:53Z","lastTransitionTime":"2026-02-23T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.888646 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:53Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.893386 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.893422 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.893430 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.893446 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.893457 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:53Z","lastTransitionTime":"2026-02-23T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.910213 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:53Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.913887 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.913921 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.913929 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.913944 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.913953 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:53Z","lastTransitionTime":"2026-02-23T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.929400 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:53Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.933293 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.933323 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.933333 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.933351 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:49:53 crc kubenswrapper[4940]: I0223 08:49:53.933361 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:49:53Z","lastTransitionTime":"2026-02-23T08:49:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.948290 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:53Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:53 crc kubenswrapper[4940]: E0223 08:49:53.948439 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:49:54 crc kubenswrapper[4940]: I0223 08:49:54.591215 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:53:08.593788404 +0000 UTC Feb 23 08:49:54 crc kubenswrapper[4940]: E0223 08:49:54.596120 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.345338 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.345380 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.345427 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.345891 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:55 crc kubenswrapper[4940]: E0223 08:49:55.346011 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:55 crc kubenswrapper[4940]: E0223 08:49:55.346163 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:55 crc kubenswrapper[4940]: E0223 08:49:55.346422 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:55 crc kubenswrapper[4940]: E0223 08:49:55.347012 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.347388 4940 scope.go:117] "RemoveContainer" containerID="69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.592197 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:51:03.41604287 +0000 UTC Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.628186 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/1.log" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.631379 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97"} Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.631904 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.648253 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.661913 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.678054 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.699079 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.712546 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc 
kubenswrapper[4940]: I0223 08:49:55.741864 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.755201 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.771668 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.789605 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.805025 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.819756 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.843880 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config
/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.864945 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.882270 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.896118 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc 
kubenswrapper[4940]: I0223 08:49:55.911342 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.925745 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:55 crc kubenswrapper[4940]: I0223 08:49:55.939788 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:55Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.593324 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:05:26.484472177 +0000 UTC Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.637440 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/2.log" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.638467 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/1.log" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.642737 4940 generic.go:334] "Generic (PLEG): container finished" podID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerID="f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97" exitCode=1 Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.642811 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97"} Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.642893 4940 scope.go:117] "RemoveContainer" containerID="69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.643867 4940 
scope.go:117] "RemoveContainer" containerID="f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97" Feb 23 08:49:56 crc kubenswrapper[4940]: E0223 08:49:56.644054 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.659448 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.678204 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.694752 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.710063 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.722399 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.735933 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.751812 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.767010 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.793956 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.805712 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc 
kubenswrapper[4940]: I0223 08:49:56.834885 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.852360 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.869691 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.882207 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.894587 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.909342 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc kubenswrapper[4940]: I0223 08:49:56.929400 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69094f2d145425aee384881c22bbc1f0478a24df582e7eedcc1cf79af015ce56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:41Z\\\",\\\"message\\\":\\\"oUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0223 08:49:41.701782 6731 lb_config.go:1031] Cluster endpoints for 
openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0223 08:49:41.701745 6731 services_controller.go:452] Built service openshift-machine-api/machine-api-operator per-node LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701823 6731 services_controller.go:444] Built service openshift-kube-controller-manager-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701844 6731 services_controller.go:445] Built service openshift-kube-controller-manager-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0223 08:49:41.701867 6731 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI0223 08:49:41.701866 6731 services_controller.go:451] Built service openshift-kube-controller-manager-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[str\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 
obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:56 crc 
kubenswrapper[4940]: I0223 08:49:56.942072 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s
8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.345591 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.345660 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.346206 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.346457 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:57 crc kubenswrapper[4940]: E0223 08:49:57.346465 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:57 crc kubenswrapper[4940]: E0223 08:49:57.346588 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:57 crc kubenswrapper[4940]: E0223 08:49:57.346685 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:57 crc kubenswrapper[4940]: E0223 08:49:57.347223 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.594012 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:15:48.71999697 +0000 UTC Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.648604 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/2.log" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.652822 4940 scope.go:117] "RemoveContainer" containerID="f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97" Feb 23 08:49:57 crc kubenswrapper[4940]: E0223 08:49:57.652976 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.673469 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.684578 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.695410 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.709342 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.720290 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc 
kubenswrapper[4940]: I0223 08:49:57.740022 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.760115 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.771465 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.785786 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.797587 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.810022 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.821499 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.831978 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.842210 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.851591 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 
crc kubenswrapper[4940]: I0223 08:49:57.865664 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.876166 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:57 crc kubenswrapper[4940]: I0223 08:49:57.887126 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:57Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:58 crc kubenswrapper[4940]: I0223 08:49:58.594944 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:01:03.381724739 +0000 UTC Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.345318 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.345333 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.345410 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.349875 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:49:59 crc kubenswrapper[4940]: E0223 08:49:59.349994 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:49:59 crc kubenswrapper[4940]: E0223 08:49:59.350159 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:49:59 crc kubenswrapper[4940]: E0223 08:49:59.350244 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:49:59 crc kubenswrapper[4940]: E0223 08:49:59.350301 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.364903 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.383146 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.399496 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc 
kubenswrapper[4940]: I0223 08:49:59.417044 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.433455 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.450499 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.469901 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.485750 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.503081 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.520937 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.547475 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.569142 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.584191 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.595047 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:08:09.95684888 +0000 UTC Feb 23 08:49:59 crc kubenswrapper[4940]: E0223 08:49:59.596867 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.603109 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.617258 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e286
2ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.631011 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.649170 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:49:59 crc kubenswrapper[4940]: I0223 08:49:59.665138 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:59Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:00 crc kubenswrapper[4940]: I0223 08:49:59.999983 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod 
\"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:00 crc kubenswrapper[4940]: E0223 08:50:00.000168 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:50:00 crc kubenswrapper[4940]: E0223 08:50:00.000266 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:50:16.000245588 +0000 UTC m=+147.383451845 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:50:00 crc kubenswrapper[4940]: I0223 08:50:00.595563 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:49:58.249493081 +0000 UTC Feb 23 08:50:01 crc kubenswrapper[4940]: I0223 08:50:01.345654 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:01 crc kubenswrapper[4940]: I0223 08:50:01.345699 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:01 crc kubenswrapper[4940]: E0223 08:50:01.345803 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:01 crc kubenswrapper[4940]: I0223 08:50:01.345878 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:01 crc kubenswrapper[4940]: I0223 08:50:01.345975 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:01 crc kubenswrapper[4940]: E0223 08:50:01.346143 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:01 crc kubenswrapper[4940]: E0223 08:50:01.346228 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:01 crc kubenswrapper[4940]: E0223 08:50:01.346356 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:01 crc kubenswrapper[4940]: I0223 08:50:01.595872 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:13:56.195288331 +0000 UTC Feb 23 08:50:02 crc kubenswrapper[4940]: I0223 08:50:02.596599 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:57:20.842955509 +0000 UTC Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.233011 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.233208 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:07.233172018 +0000 UTC m=+198.616378205 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.233330 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.233494 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.233581 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.233662 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.233712 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:51:07.233701365 +0000 UTC m=+198.616907522 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.233728 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:51:07.233721025 +0000 UTC m=+198.616927272 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.334235 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.334284 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334422 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334476 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334490 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334549 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:51:07.334531118 +0000 UTC m=+198.717737285 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334427 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334578 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334629 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.334688 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:51:07.334672193 +0000 UTC m=+198.717878440 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.345336 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.345364 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.345466 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.345748 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.345750 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.346097 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.346204 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:03 crc kubenswrapper[4940]: E0223 08:50:03.346271 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:03 crc kubenswrapper[4940]: I0223 08:50:03.597480 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:14:12.664966643 +0000 UTC Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.225310 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.251256 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.267132 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.283261 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.313257 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.325026 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.325079 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.325090 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.325105 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.325118 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:04Z","lastTransitionTime":"2026-02-23T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.337808 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.343930 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.347809 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.347878 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.347901 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.347933 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.347955 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:04Z","lastTransitionTime":"2026-02-23T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.353051 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.360360 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.363927 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.363980 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.363994 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.364013 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.364027 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:04Z","lastTransitionTime":"2026-02-23T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.372259 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.376523 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.381660 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.381759 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.381819 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.381843 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.381858 4940 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:04Z","lastTransitionTime":"2026-02-23T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.384507 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.396738 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.400397 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.400444 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.400456 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.400471 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.400484 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:04Z","lastTransitionTime":"2026-02-23T08:50:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.400969 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.411401 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.411558 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.413034 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.440633 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.464535 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.474322 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663ad
b017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.486426 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc 
kubenswrapper[4940]: I0223 08:50:04.499118 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.511983 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.524427 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.544354 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:04Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:04 crc kubenswrapper[4940]: I0223 08:50:04.597960 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:10:21.397237099 +0000 UTC Feb 23 08:50:04 crc kubenswrapper[4940]: E0223 08:50:04.598652 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:05 crc kubenswrapper[4940]: I0223 08:50:05.345682 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:05 crc kubenswrapper[4940]: I0223 08:50:05.345817 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:05 crc kubenswrapper[4940]: I0223 08:50:05.345868 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:05 crc kubenswrapper[4940]: I0223 08:50:05.346104 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:05 crc kubenswrapper[4940]: E0223 08:50:05.346089 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:05 crc kubenswrapper[4940]: E0223 08:50:05.346241 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:05 crc kubenswrapper[4940]: E0223 08:50:05.346365 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:05 crc kubenswrapper[4940]: E0223 08:50:05.346509 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:05 crc kubenswrapper[4940]: I0223 08:50:05.599013 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:48:05.085084118 +0000 UTC Feb 23 08:50:06 crc kubenswrapper[4940]: I0223 08:50:06.599869 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:09:06.121912624 +0000 UTC Feb 23 08:50:07 crc kubenswrapper[4940]: I0223 08:50:07.344712 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:07 crc kubenswrapper[4940]: I0223 08:50:07.344739 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:07 crc kubenswrapper[4940]: I0223 08:50:07.344784 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:07 crc kubenswrapper[4940]: E0223 08:50:07.344864 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:07 crc kubenswrapper[4940]: I0223 08:50:07.345072 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:07 crc kubenswrapper[4940]: E0223 08:50:07.345058 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:07 crc kubenswrapper[4940]: E0223 08:50:07.345146 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:07 crc kubenswrapper[4940]: E0223 08:50:07.345212 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:07 crc kubenswrapper[4940]: I0223 08:50:07.600494 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:16:11.968210618 +0000 UTC Feb 23 08:50:08 crc kubenswrapper[4940]: I0223 08:50:08.601447 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:11:33.327190912 +0000 UTC Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.344822 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.344902 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.345830 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:09 crc kubenswrapper[4940]: E0223 08:50:09.346020 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:09 crc kubenswrapper[4940]: E0223 08:50:09.346101 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:09 crc kubenswrapper[4940]: E0223 08:50:09.346178 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.345819 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:09 crc kubenswrapper[4940]: E0223 08:50:09.346815 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.360432 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.370932 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.380967 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663ad
b017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.398129 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.412811 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.426736 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.447784 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.461715 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc 
kubenswrapper[4940]: I0223 08:50:09.490731 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.519388 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.537898 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.556895 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.570769 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.588103 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: E0223 08:50:09.599093 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.601560 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:11:33.001355607 +0000 UTC Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.607948 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.620437 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49
:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.633191 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:09 crc kubenswrapper[4940]: I0223 08:50:09.645175 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e286
2ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:09Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:10 crc kubenswrapper[4940]: I0223 08:50:10.602467 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:15:11.809424842 +0000 UTC Feb 23 08:50:11 crc kubenswrapper[4940]: I0223 08:50:11.344817 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:11 crc kubenswrapper[4940]: E0223 08:50:11.345008 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:11 crc kubenswrapper[4940]: I0223 08:50:11.345321 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:11 crc kubenswrapper[4940]: I0223 08:50:11.345386 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:11 crc kubenswrapper[4940]: I0223 08:50:11.345524 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:11 crc kubenswrapper[4940]: E0223 08:50:11.345575 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:11 crc kubenswrapper[4940]: E0223 08:50:11.345802 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:11 crc kubenswrapper[4940]: E0223 08:50:11.346006 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:11 crc kubenswrapper[4940]: I0223 08:50:11.603058 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:42:23.942743585 +0000 UTC Feb 23 08:50:12 crc kubenswrapper[4940]: I0223 08:50:12.346396 4940 scope.go:117] "RemoveContainer" containerID="f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97" Feb 23 08:50:12 crc kubenswrapper[4940]: E0223 08:50:12.346653 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:50:12 crc kubenswrapper[4940]: I0223 08:50:12.603946 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:06:28.81035375 +0000 UTC Feb 23 08:50:13 crc kubenswrapper[4940]: I0223 08:50:13.344990 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:13 crc kubenswrapper[4940]: I0223 08:50:13.345016 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:13 crc kubenswrapper[4940]: E0223 08:50:13.345163 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:13 crc kubenswrapper[4940]: I0223 08:50:13.345206 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:13 crc kubenswrapper[4940]: E0223 08:50:13.345297 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:13 crc kubenswrapper[4940]: E0223 08:50:13.345340 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:13 crc kubenswrapper[4940]: I0223 08:50:13.345635 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:13 crc kubenswrapper[4940]: E0223 08:50:13.345714 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:13 crc kubenswrapper[4940]: I0223 08:50:13.604663 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:06:47.083953464 +0000 UTC Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.600501 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.605154 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:59:05.431597619 +0000 UTC Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.693063 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.693144 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.693163 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.693195 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.693214 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:14Z","lastTransitionTime":"2026-02-23T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.707010 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:14Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.712205 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.712425 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.712499 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.712567 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.712740 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:14Z","lastTransitionTime":"2026-02-23T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.726325 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:14Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.730265 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.730463 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.730547 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.730671 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.730758 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:14Z","lastTransitionTime":"2026-02-23T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.747306 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:14Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.752697 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.752847 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.752924 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.753019 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.753113 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:14Z","lastTransitionTime":"2026-02-23T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.775061 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:14Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.780412 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.780470 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.780486 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.780510 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:14 crc kubenswrapper[4940]: I0223 08:50:14.780523 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:14Z","lastTransitionTime":"2026-02-23T08:50:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.795789 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:14Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:14 crc kubenswrapper[4940]: E0223 08:50:14.795946 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:50:15 crc kubenswrapper[4940]: I0223 08:50:15.345640 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:15 crc kubenswrapper[4940]: I0223 08:50:15.345663 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:15 crc kubenswrapper[4940]: E0223 08:50:15.345786 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:15 crc kubenswrapper[4940]: I0223 08:50:15.345840 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:15 crc kubenswrapper[4940]: E0223 08:50:15.345901 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:15 crc kubenswrapper[4940]: I0223 08:50:15.346021 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:15 crc kubenswrapper[4940]: E0223 08:50:15.346192 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:15 crc kubenswrapper[4940]: E0223 08:50:15.346374 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:15 crc kubenswrapper[4940]: I0223 08:50:15.605475 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:34:27.450197144 +0000 UTC Feb 23 08:50:16 crc kubenswrapper[4940]: I0223 08:50:16.061699 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:16 crc kubenswrapper[4940]: E0223 08:50:16.061870 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:50:16 crc kubenswrapper[4940]: E0223 08:50:16.061981 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:50:48.061952422 +0000 UTC m=+179.445158619 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:50:16 crc kubenswrapper[4940]: I0223 08:50:16.605939 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:00:40.055864461 +0000 UTC Feb 23 08:50:17 crc kubenswrapper[4940]: I0223 08:50:17.344768 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:17 crc kubenswrapper[4940]: I0223 08:50:17.344762 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:17 crc kubenswrapper[4940]: E0223 08:50:17.344977 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:17 crc kubenswrapper[4940]: I0223 08:50:17.344794 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:17 crc kubenswrapper[4940]: I0223 08:50:17.344798 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:17 crc kubenswrapper[4940]: E0223 08:50:17.345184 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:17 crc kubenswrapper[4940]: E0223 08:50:17.345206 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:17 crc kubenswrapper[4940]: E0223 08:50:17.345267 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:17 crc kubenswrapper[4940]: I0223 08:50:17.607012 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:54:27.148469055 +0000 UTC Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.362048 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.607343 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:45:54.142039622 +0000 UTC Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.728482 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/0.log" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.728570 4940 generic.go:334] "Generic (PLEG): container finished" podID="ec3904ad-5d0b-46b4-9c13-68454d9a3cb2" containerID="f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc" exitCode=1 Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.728743 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" 
event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerDied","Data":"f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc"} Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.730359 4940 scope.go:117] "RemoveContainer" containerID="f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.750227 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.771462 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.790559 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.816553 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z 
is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.828897 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.847119 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.862813 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.879285 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.890871 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.904083 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.920985 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e286
2ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.936689 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd
4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.949182 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.959601 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663ad
b017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc kubenswrapper[4940]: I0223 08:50:18.979762 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:18 crc 
kubenswrapper[4940]: I0223 08:50:18.994580 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:18Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.007872 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.020781 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.041868 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.345099 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:19 crc kubenswrapper[4940]: E0223 08:50:19.345326 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.345502 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:19 crc kubenswrapper[4940]: E0223 08:50:19.345706 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.345771 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:19 crc kubenswrapper[4940]: E0223 08:50:19.345951 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.346141 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:19 crc kubenswrapper[4940]: E0223 08:50:19.346579 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.357176 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.373480 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 
08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.390571 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
3T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.404800 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.418947 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.429747 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc 
kubenswrapper[4940]: I0223 08:50:19.440935 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.453369 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.467821 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.483496 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.497817 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.513514 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.529369 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.549820 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z 
is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.563652 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.591291 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: E0223 08:50:19.601130 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.607551 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:39:49.513376495 +0000 UTC Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.614267 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.630913 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.643605 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc 
kubenswrapper[4940]: I0223 08:50:19.735023 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/0.log" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.735110 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerStarted","Data":"fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0"} Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.756040 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.770880 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc 
kubenswrapper[4940]: I0223 08:50:19.787056 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54
d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.802635 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.816634 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663ad
b017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.830232 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.843467 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.856793 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.875878 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.888001 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc 
kubenswrapper[4940]: I0223 08:50:19.910738 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.921548 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.940859 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.956372 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.971194 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.985510 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:19 crc kubenswrapper[4940]: I0223 08:50:19.999356 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:19Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:20 crc kubenswrapper[4940]: I0223 08:50:20.012397 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:20Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:20 crc kubenswrapper[4940]: I0223 08:50:20.021276 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca3
99a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:20Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:20 crc kubenswrapper[4940]: I0223 08:50:20.608995 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:53:45.746673177 +0000 UTC Feb 23 08:50:21 crc kubenswrapper[4940]: I0223 08:50:21.345407 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:21 crc kubenswrapper[4940]: I0223 08:50:21.345526 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:21 crc kubenswrapper[4940]: E0223 08:50:21.345653 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:21 crc kubenswrapper[4940]: E0223 08:50:21.345811 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:21 crc kubenswrapper[4940]: I0223 08:50:21.345935 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:21 crc kubenswrapper[4940]: E0223 08:50:21.346067 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:21 crc kubenswrapper[4940]: I0223 08:50:21.346270 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:21 crc kubenswrapper[4940]: E0223 08:50:21.346505 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:21 crc kubenswrapper[4940]: I0223 08:50:21.610163 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:12:13.616305139 +0000 UTC Feb 23 08:50:22 crc kubenswrapper[4940]: I0223 08:50:22.611280 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:23:20.133251551 +0000 UTC Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.345139 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.345181 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.345244 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:23 crc kubenswrapper[4940]: E0223 08:50:23.345546 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:23 crc kubenswrapper[4940]: E0223 08:50:23.345827 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.345838 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:23 crc kubenswrapper[4940]: E0223 08:50:23.346320 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:23 crc kubenswrapper[4940]: E0223 08:50:23.346506 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.346779 4940 scope.go:117] "RemoveContainer" containerID="f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.611960 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:33:13.063240762 +0000 UTC Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.750014 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/2.log" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.752428 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f"} Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.752889 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.767326 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be
308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.778223 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e286
2ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.790540 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.818110 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.830898 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.866450 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.883231 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.896041 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc 
kubenswrapper[4940]: I0223 08:50:23.911564 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.950056 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.973330 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:23 crc kubenswrapper[4940]: I0223 08:50:23.994434 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:23Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.007497 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.020514 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.037311 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.059185 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z 
is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] 
Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-con
fig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-s
etup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.070781 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.091947 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.107402 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: E0223 08:50:24.602037 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.612695 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:06:57.78604936 +0000 UTC Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.759228 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/3.log" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.760361 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/2.log" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.764866 4940 generic.go:334] "Generic (PLEG): container finished" podID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" exitCode=1 Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.764935 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f"} Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.765045 4940 scope.go:117] "RemoveContainer" containerID="f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.766311 4940 scope.go:117] "RemoveContainer" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" Feb 23 08:50:24 crc kubenswrapper[4940]: E0223 08:50:24.766866 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.788007 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube
-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.807721 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e286
2ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.823630 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.844476 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.863482 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.880732 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.898000 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.912180 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc 
kubenswrapper[4940]: I0223 08:50:24.931551 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.945327 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.961112 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.974457 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:24 crc kubenswrapper[4940]: I0223 08:50:24.988450 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.004557 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.018182 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.039386 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f032dbff4d8efcc9eac6064907ff8c278bd88ec4f46b8cacc75876dc5a85ce97\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:49:56Z\\\",\\\"message\\\":\\\"te: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:49:56Z 
is after 2025-08-24T17:21:41Z]\\\\nI0223 08:49:56.237357 6968 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0223 08:49:56.237358 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0223 08:49:56.237362 6968 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0223 08:49:56.237223 6968 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:NB_Global Row:map[] Rows:[] Columns:[] Mutations:[{Column:nb_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:24Z\\\",\\\"message\\\":\\\" for network=default: []services.LB{}\\\\nI0223 08:50:24.347280 7295 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qkw6w after 0 failed attempt(s)\\\\nI0223 08:50:24.347288 7295 services_controller.go:453] 
Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0223 08:50:24.347294 7295 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:50:24.347304 7295 services_controller.go:454] Service openshift-ingress-canary/ingress-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 
t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:50:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.050858 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.070129 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.070174 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.070188 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.070206 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.070219 4940 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:25Z","lastTransitionTime":"2026-02-23T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.070594 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d
229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.081201 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.083094 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.086354 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.086407 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.086433 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.086455 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.086469 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:25Z","lastTransitionTime":"2026-02-23T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.100070 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.103982 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.104017 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.104027 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.104043 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.104055 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:25Z","lastTransitionTime":"2026-02-23T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.115574 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.118753 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.118789 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.118799 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.118816 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.118829 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:25Z","lastTransitionTime":"2026-02-23T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.131539 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.135749 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.135794 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.135809 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.135828 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.135844 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:25Z","lastTransitionTime":"2026-02-23T08:50:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.155563 4940 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0b40f9a7-6d5b-496d-bcec-88183c6aba29\\\",\\\"systemUUID\\\":\\\"3c406e8c-0d77-4ead-8ee9-37cf28c01cc1\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.155751 4940 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.345271 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.345333 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.345433 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.345291 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.345425 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.345500 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.345543 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.345588 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.613203 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:01:55.294680596 +0000 UTC Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.772456 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/3.log" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.777557 4940 scope.go:117] "RemoveContainer" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" Feb 23 08:50:25 crc kubenswrapper[4940]: E0223 08:50:25.777786 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.791870 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.818916 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a13924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.834720 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.849486 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.863978 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.883479 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.897711 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.919389 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:24Z\\\",\\\"message\\\":\\\" for network=default: []services.LB{}\\\\nI0223 08:50:24.347280 7295 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qkw6w after 0 failed attempt(s)\\\\nI0223 08:50:24.347288 7295 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0223 08:50:24.347294 7295 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:50:24.347304 7295 services_controller.go:454] Service openshift-ingress-canary/ingress-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:50:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.933143 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.948412 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.964555 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc 
kubenswrapper[4940]: I0223 08:50:25.983251 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54
d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:25 crc kubenswrapper[4940]: I0223 08:50:25.997604 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:25Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc kubenswrapper[4940]: I0223 08:50:26.010487 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663ad
b017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:26Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc kubenswrapper[4940]: I0223 08:50:26.025443 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:26Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc kubenswrapper[4940]: I0223 08:50:26.038571 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:26Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc kubenswrapper[4940]: I0223 08:50:26.051003 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:26Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc kubenswrapper[4940]: I0223 08:50:26.066649 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:26Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc kubenswrapper[4940]: I0223 08:50:26.078078 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:26Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:26 crc 
kubenswrapper[4940]: I0223 08:50:26.614020 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:32:36.120674474 +0000 UTC Feb 23 08:50:27 crc kubenswrapper[4940]: I0223 08:50:27.345694 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:27 crc kubenswrapper[4940]: I0223 08:50:27.345689 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:27 crc kubenswrapper[4940]: I0223 08:50:27.345761 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:27 crc kubenswrapper[4940]: E0223 08:50:27.346291 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:27 crc kubenswrapper[4940]: E0223 08:50:27.346218 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:27 crc kubenswrapper[4940]: I0223 08:50:27.345843 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:27 crc kubenswrapper[4940]: E0223 08:50:27.346448 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:27 crc kubenswrapper[4940]: E0223 08:50:27.346536 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:27 crc kubenswrapper[4940]: I0223 08:50:27.614911 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:25:27.251290395 +0000 UTC Feb 23 08:50:28 crc kubenswrapper[4940]: I0223 08:50:28.615152 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:20:30.585002835 +0000 UTC Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.345581 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.345680 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.345593 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:29 crc kubenswrapper[4940]: E0223 08:50:29.345869 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.345922 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:29 crc kubenswrapper[4940]: E0223 08:50:29.345987 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:29 crc kubenswrapper[4940]: E0223 08:50:29.345934 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:29 crc kubenswrapper[4940]: E0223 08:50:29.346302 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.361110 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4vcwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41834650-70c0-4558-9052-d7cdfc785e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a8663adb017b0225e3a827e7670572c25e2b8123a2153e9f99830d8ddc60ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dwtwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4vcwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.379660 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67d1926d-47ed-4a5c-b868-690599126446\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:59Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0223 08:48:59.155726 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 08:48:59.155899 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 08:48:59.156685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1318688766/tls.crt::/tmp/serving-cert-1318688766/tls.key\\\\\\\"\\\\nI0223 08:48:59.543716 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 08:48:59.548278 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 08:48:59.548307 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 08:48:59.548334 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 08:48:59.548340 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 08:48:59.558852 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 08:48:59.558871 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 08:48:59.558894 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558902 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 08:48:59.558909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 08:48:59.558912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 08:48:59.558917 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 08:48:59.558922 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 08:48:59.560387 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7d8e4f8739813cb8d24afc9999755886cd
4a77958b8b4e1514fc0bc53403258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.390751 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88c878ec-2b3e-452c-bc23-a395b09fa6aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3a121c424818b098798b4f28bc000a66157862597ce373a4d4748fb75f179fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3ba561d0277929d8f55fcd1fe87d8f37a2d126b94eaf9ac25bc79ad17f61e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90753b936aa559224b0241e456ce800658b0cdaa5c92cf54972255319d1ff334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d742db4469dc738b276ecd68cf5aea77ad5e6fb4460a4fdca4a626a9a073956f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.406133 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.426798 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"353c05f4-9ef1-4c0f-8388-8b56cc4c22d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1be9bb45923468ac8fc8f206f3493431ef579d16d158ba7cf330944b1eb7585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e45b5826dae32da76487ca6179e00691e0f92748e895fbe42aa14d39a808a12\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a9386fc4a4a81adf5f99f88359b958f8c47f37eb176a860c96e5e3b6a219f3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0e096f58de305f0df76444507c878fdf5eb1a134deb3725e1e46ac4d6581541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcc8e
2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcc8e2b5ae391db63624b8bcbaa5cab45fcc5f168db2a52700d34ad2c3eb9b89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0b3f29a8c1eea72788476a0aac20ec99cb908df8983b5271448ae824b8ab96a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:36Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d249ae04e3dece913fbfa0f45169adf9d267043d3cf2dd93c4ace24c5fdf1dd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j86pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tj6ms\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.439870 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmbl4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jwb9b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc 
kubenswrapper[4940]: I0223 08:50:29.456478 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b55d9910-222c-4c04-8a15-cecc288b8dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2696757a27ff629002d6fe762776487514e74ad9465357240b3f196cf9e3cec7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73db3b91fa5cdca73acde9a3bcb9062394a54e61e87020b5d656290a54aa4f70\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T08:48:50Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 08:48:20.958890 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 08:48:20.960636 1 observer_polling.go:159] Starting file observer\\\\nI0223 08:48:20.961488 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 08:48:20.962119 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 08:48:48.479316 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 08:48:50.579389 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 08:48:50.579494 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:48:20Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dbb2f4667fe28c3ae9389cece3d7246f31a8bd82328370a699dae73e11b19a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0c1a355a4218aa74f5a9f20e74427e20e7dd1864bfb683eeb2ac43ef6d7357f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.468467 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ca6a355de735d39d8c069429deed2a9983ba8d97d9210475c0b22791b6357d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.482266 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.497755 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4164a479bddce53118172c24d9dc0c8174560ac6f49d2a1fa7321845f7f4020a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.512380 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.527114 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f5fabdbd24e079413b28b30b7e72c60763c64d08d55badde2d84d38dcbc8a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bce84ef684469eb1b554c08d38cbd547ce6fa507b189a9f63c740583e66bc5e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.546832 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-czrqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:18Z\\\",\\\"message\\\":\\\"2026-02-23T08:49:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5\\\\n2026-02-23T08:49:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ef4ba046-8b26-4715-b009-2f66c82885e5 to /host/opt/cni/bin/\\\\n2026-02-23T08:49:33Z [verbose] multus-daemon started\\\\n2026-02-23T08:49:33Z [verbose] 
Readiness Indicator file check\\\\n2026-02-23T08:50:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8jmt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-czrqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.575658 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0b5a971-c6f4-4518-9bb3-49d228275668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T08:50:24Z\\\",\\\"message\\\":\\\" for network=default: []services.LB{}\\\\nI0223 08:50:24.347280 7295 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qkw6w after 0 failed attempt(s)\\\\nI0223 08:50:24.347288 7295 services_controller.go:453] Built service openshift-ingress-canary/ingress-canary template LB for network=default: []services.LB{}\\\\nF0223 08:50:24.347294 7295 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:24Z is after 2025-08-24T17:21:41Z]\\\\nI0223 08:50:24.347304 7295 services_controller.go:454] Service openshift-ingress-canary/ingress-canary for network=default has 2 cluster-wide, 0 per-node configs, 0 t\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T08:50:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ef19c4e2659a3576
0fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:49:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sq2ck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qkw6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.586651 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a3f942c-4bf7-4e48-84ef-83ca0a07bee7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4aa4e3eae7c81a14b6302a982f67ee3d9ecc29fea292654bb6ed70d209b7b202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0f77059ccd04d8f502c2652a5e62a780bf57c1beb0bde2aa974f3d46c798f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: E0223 08:50:29.602607 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.615890 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:31:15.899186493 +0000 UTC Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.624552 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47a2763e-a680-45c0-b4e8-0930c73e2e6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:48:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:47:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4316680c09083874958b09e3d47897f844490827fafe87653b1459708ba55af3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c467a466c26e4cd6c762317dcc08a84b7bd61aa044e8d075f7ada83206764d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a314775a13f5a612d19ad1c27ccadc74e4ad191c84bd78a1068105509e514a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://863243ea1e361e4f5cd70d64fa4a0c078de0c0529c9928b413db44a1
3924c474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cca6b93a0655ce53ad6d1ccfe625533a67c58b3876c57fe4473d699c95938bd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:47:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc81c994f3f21269783d08f800aa4a58bdf43d229b5b61d7f40f8706230fd6c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://260cdbf12890f13f411672fde6af3350b3ddc30824ac5b43110247ae1c7b74fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://768f56feda9f5952088933827a03f9d23d7d807d1354578de2335e7c3e8beaaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T08:47:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T08:47:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:47:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.637294 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ll9gt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7931c6-f7d4-4166-b332-d954717f67c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4dcef9eae99fcca399a3b47ff74ebb46cd4c0854422c9153d2ceb6f3a1ea8a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8clf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:37Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ll9gt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.648041 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3f2cfd6-5ddf-436d-998f-440f1cc642b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a14bc73401f5a1f767af0ebe6b81d75f4f735acadd163be9aa0b78d067677a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpqgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-26mgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:29 crc kubenswrapper[4940]: I0223 08:50:29.662514 4940 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e2b63c-99a7-46aa-a8e2-cee2bf5d7066\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T08:49:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6bce5b28c2d7a3946303e9ebd8738dd2337190884db0025e336d96f1641171c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5d550949b5cf8f3efe2267f466a5c769e2862ab361c86a14eef0baf874e7dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T08:49:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4p7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T08:49:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gjpr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T08:50:29Z is after 2025-08-24T17:21:41Z" Feb 23 08:50:30 crc 
kubenswrapper[4940]: I0223 08:50:30.616422 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:58:47.886390047 +0000 UTC Feb 23 08:50:31 crc kubenswrapper[4940]: I0223 08:50:31.345220 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:31 crc kubenswrapper[4940]: E0223 08:50:31.345448 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:31 crc kubenswrapper[4940]: I0223 08:50:31.345693 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:31 crc kubenswrapper[4940]: E0223 08:50:31.345871 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:31 crc kubenswrapper[4940]: I0223 08:50:31.345887 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:31 crc kubenswrapper[4940]: E0223 08:50:31.346085 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:31 crc kubenswrapper[4940]: I0223 08:50:31.346217 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:31 crc kubenswrapper[4940]: E0223 08:50:31.346369 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:31 crc kubenswrapper[4940]: I0223 08:50:31.617557 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:59:38.769876637 +0000 UTC Feb 23 08:50:32 crc kubenswrapper[4940]: I0223 08:50:32.618167 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:36:02.978512903 +0000 UTC Feb 23 08:50:33 crc kubenswrapper[4940]: I0223 08:50:33.345645 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:33 crc kubenswrapper[4940]: I0223 08:50:33.345789 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:33 crc kubenswrapper[4940]: E0223 08:50:33.345830 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:33 crc kubenswrapper[4940]: I0223 08:50:33.345845 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:33 crc kubenswrapper[4940]: I0223 08:50:33.345876 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:33 crc kubenswrapper[4940]: E0223 08:50:33.346022 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:33 crc kubenswrapper[4940]: E0223 08:50:33.346096 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:33 crc kubenswrapper[4940]: E0223 08:50:33.346173 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:33 crc kubenswrapper[4940]: I0223 08:50:33.618282 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:06:42.931235407 +0000 UTC Feb 23 08:50:34 crc kubenswrapper[4940]: E0223 08:50:34.603637 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:34 crc kubenswrapper[4940]: I0223 08:50:34.618375 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:18:09.403664505 +0000 UTC Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.345311 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.345362 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.345406 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.345320 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:35 crc kubenswrapper[4940]: E0223 08:50:35.345516 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:35 crc kubenswrapper[4940]: E0223 08:50:35.345842 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:35 crc kubenswrapper[4940]: E0223 08:50:35.345983 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:35 crc kubenswrapper[4940]: E0223 08:50:35.346028 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.497180 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.497215 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.497225 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.497241 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.497252 4940 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T08:50:35Z","lastTransitionTime":"2026-02-23T08:50:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.547009 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg"] Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.547377 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.550337 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.551167 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.551316 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.553450 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.583365 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podStartSLOduration=98.583333257 podStartE2EDuration="1m38.583333257s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.56792936 +0000 UTC m=+166.951135547" watchObservedRunningTime="2026-02-23 08:50:35.583333257 +0000 UTC m=+166.966539414" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.599372 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gjpr8" 
podStartSLOduration=98.599341524 podStartE2EDuration="1m38.599341524s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.584210215 +0000 UTC m=+166.967416382" watchObservedRunningTime="2026-02-23 08:50:35.599341524 +0000 UTC m=+166.982547691" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.616834 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.616807617 podStartE2EDuration="1m36.616807617s" podCreationTimestamp="2026-02-23 08:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.600068497 +0000 UTC m=+166.983274684" watchObservedRunningTime="2026-02-23 08:50:35.616807617 +0000 UTC m=+167.000013784" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.617352 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.617343294 podStartE2EDuration="42.617343294s" podCreationTimestamp="2026-02-23 08:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.617263452 +0000 UTC m=+167.000469609" watchObservedRunningTime="2026-02-23 08:50:35.617343294 +0000 UTC m=+167.000549461" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.618668 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:11:58.444748213 +0000 UTC Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.618753 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 
08:50:35.628969 4940 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.630144 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4vcwd" podStartSLOduration=98.630121857 podStartE2EDuration="1m38.630121857s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.629471136 +0000 UTC m=+167.012677293" watchObservedRunningTime="2026-02-23 08:50:35.630121857 +0000 UTC m=+167.013328034" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.664064 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=93.664040431 podStartE2EDuration="1m33.664040431s" podCreationTimestamp="2026-02-23 08:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.64790682 +0000 UTC m=+167.031112987" watchObservedRunningTime="2026-02-23 08:50:35.664040431 +0000 UTC m=+167.047246598" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.691305 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9dcf729c-528d-4c30-be1d-6d97502eefcb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.691461 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9dcf729c-528d-4c30-be1d-6d97502eefcb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.691563 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9dcf729c-528d-4c30-be1d-6d97502eefcb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.691652 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dcf729c-528d-4c30-be1d-6d97502eefcb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.691691 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9dcf729c-528d-4c30-be1d-6d97502eefcb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.697428 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tj6ms" podStartSLOduration=98.697406317 podStartE2EDuration="1m38.697406317s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.696894992 +0000 UTC m=+167.080101179" watchObservedRunningTime="2026-02-23 08:50:35.697406317 +0000 UTC m=+167.080612474" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.754373 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.754351985 podStartE2EDuration="17.754351985s" podCreationTimestamp="2026-02-23 08:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.753970663 +0000 UTC m=+167.137176820" watchObservedRunningTime="2026-02-23 08:50:35.754351985 +0000 UTC m=+167.137558142" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.786444 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=59.78642011 podStartE2EDuration="59.78642011s" podCreationTimestamp="2026-02-23 08:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.785206981 +0000 UTC m=+167.168413148" watchObservedRunningTime="2026-02-23 08:50:35.78642011 +0000 UTC m=+167.169626267" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.792456 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9dcf729c-528d-4c30-be1d-6d97502eefcb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.792519 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/9dcf729c-528d-4c30-be1d-6d97502eefcb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.792670 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dcf729c-528d-4c30-be1d-6d97502eefcb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.792733 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9dcf729c-528d-4c30-be1d-6d97502eefcb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.792769 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dcf729c-528d-4c30-be1d-6d97502eefcb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.793099 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9dcf729c-528d-4c30-be1d-6d97502eefcb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc 
kubenswrapper[4940]: I0223 08:50:35.793100 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9dcf729c-528d-4c30-be1d-6d97502eefcb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.794167 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9dcf729c-528d-4c30-be1d-6d97502eefcb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.806452 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dcf729c-528d-4c30-be1d-6d97502eefcb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.813348 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9dcf729c-528d-4c30-be1d-6d97502eefcb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cvmmg\" (UID: \"9dcf729c-528d-4c30-be1d-6d97502eefcb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.861116 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.864526 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-czrqm" podStartSLOduration=98.86449668 podStartE2EDuration="1m38.86449668s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.8638897 +0000 UTC m=+167.247095857" watchObservedRunningTime="2026-02-23 08:50:35.86449668 +0000 UTC m=+167.247702837" Feb 23 08:50:35 crc kubenswrapper[4940]: I0223 08:50:35.875562 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ll9gt" podStartSLOduration=98.875537366 podStartE2EDuration="1m38.875537366s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:35.874318126 +0000 UTC m=+167.257524293" watchObservedRunningTime="2026-02-23 08:50:35.875537366 +0000 UTC m=+167.258743523" Feb 23 08:50:36 crc kubenswrapper[4940]: I0223 08:50:36.816071 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" event={"ID":"9dcf729c-528d-4c30-be1d-6d97502eefcb","Type":"ContainerStarted","Data":"ea777bffc3633283d2b05c8060ea8f2d665db7a829ba1fcefb1c2c0b5e770eb4"} Feb 23 08:50:36 crc kubenswrapper[4940]: I0223 08:50:36.816807 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" event={"ID":"9dcf729c-528d-4c30-be1d-6d97502eefcb","Type":"ContainerStarted","Data":"bb8f7329b544d71bfedd9e7ad9ae759ad8f0e3ebfbe2df5b2528c729e7b56c01"} Feb 23 08:50:37 crc kubenswrapper[4940]: I0223 08:50:37.345121 4940 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:37 crc kubenswrapper[4940]: I0223 08:50:37.345184 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:37 crc kubenswrapper[4940]: I0223 08:50:37.345290 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:37 crc kubenswrapper[4940]: I0223 08:50:37.345317 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:37 crc kubenswrapper[4940]: E0223 08:50:37.345503 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:37 crc kubenswrapper[4940]: E0223 08:50:37.345901 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:37 crc kubenswrapper[4940]: E0223 08:50:37.346008 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:37 crc kubenswrapper[4940]: E0223 08:50:37.346117 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:39 crc kubenswrapper[4940]: I0223 08:50:39.345729 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:39 crc kubenswrapper[4940]: I0223 08:50:39.345780 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:39 crc kubenswrapper[4940]: I0223 08:50:39.345863 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:39 crc kubenswrapper[4940]: E0223 08:50:39.347335 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:39 crc kubenswrapper[4940]: I0223 08:50:39.347576 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:39 crc kubenswrapper[4940]: E0223 08:50:39.347673 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:39 crc kubenswrapper[4940]: I0223 08:50:39.347744 4940 scope.go:117] "RemoveContainer" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" Feb 23 08:50:39 crc kubenswrapper[4940]: E0223 08:50:39.347807 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:39 crc kubenswrapper[4940]: E0223 08:50:39.347839 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:39 crc kubenswrapper[4940]: E0223 08:50:39.347973 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:50:39 crc kubenswrapper[4940]: E0223 08:50:39.604144 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:41 crc kubenswrapper[4940]: I0223 08:50:41.345502 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:41 crc kubenswrapper[4940]: I0223 08:50:41.345591 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:41 crc kubenswrapper[4940]: I0223 08:50:41.345591 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:41 crc kubenswrapper[4940]: E0223 08:50:41.345701 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:41 crc kubenswrapper[4940]: I0223 08:50:41.345884 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:41 crc kubenswrapper[4940]: E0223 08:50:41.345944 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:41 crc kubenswrapper[4940]: E0223 08:50:41.345848 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:41 crc kubenswrapper[4940]: E0223 08:50:41.346063 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:43 crc kubenswrapper[4940]: I0223 08:50:43.345583 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:43 crc kubenswrapper[4940]: I0223 08:50:43.345658 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:43 crc kubenswrapper[4940]: I0223 08:50:43.345674 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:43 crc kubenswrapper[4940]: E0223 08:50:43.345831 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:43 crc kubenswrapper[4940]: E0223 08:50:43.345968 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:43 crc kubenswrapper[4940]: E0223 08:50:43.346322 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:43 crc kubenswrapper[4940]: I0223 08:50:43.346700 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:43 crc kubenswrapper[4940]: E0223 08:50:43.346840 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:44 crc kubenswrapper[4940]: E0223 08:50:44.605534 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:45 crc kubenswrapper[4940]: I0223 08:50:45.345009 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:45 crc kubenswrapper[4940]: E0223 08:50:45.345126 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:45 crc kubenswrapper[4940]: I0223 08:50:45.345234 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:45 crc kubenswrapper[4940]: I0223 08:50:45.345249 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:45 crc kubenswrapper[4940]: E0223 08:50:45.345321 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:45 crc kubenswrapper[4940]: E0223 08:50:45.345497 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:45 crc kubenswrapper[4940]: I0223 08:50:45.345696 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:45 crc kubenswrapper[4940]: E0223 08:50:45.345960 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:47 crc kubenswrapper[4940]: I0223 08:50:47.344901 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:47 crc kubenswrapper[4940]: I0223 08:50:47.344968 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:47 crc kubenswrapper[4940]: I0223 08:50:47.345000 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:47 crc kubenswrapper[4940]: E0223 08:50:47.346238 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:47 crc kubenswrapper[4940]: I0223 08:50:47.345035 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:47 crc kubenswrapper[4940]: E0223 08:50:47.346563 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:47 crc kubenswrapper[4940]: E0223 08:50:47.346377 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:47 crc kubenswrapper[4940]: E0223 08:50:47.346703 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:48 crc kubenswrapper[4940]: I0223 08:50:48.129361 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:48 crc kubenswrapper[4940]: E0223 08:50:48.129489 4940 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:50:48 crc kubenswrapper[4940]: E0223 08:50:48.129560 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs podName:d8dd2da9-cea0-44f5-8c93-91b79c7f66ea nodeName:}" failed. No retries permitted until 2026-02-23 08:51:52.129541274 +0000 UTC m=+243.512747431 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs") pod "network-metrics-daemon-jwb9b" (UID: "d8dd2da9-cea0-44f5-8c93-91b79c7f66ea") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 08:50:49 crc kubenswrapper[4940]: I0223 08:50:49.345250 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:49 crc kubenswrapper[4940]: I0223 08:50:49.345293 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:49 crc kubenswrapper[4940]: I0223 08:50:49.347378 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:49 crc kubenswrapper[4940]: I0223 08:50:49.347417 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:49 crc kubenswrapper[4940]: E0223 08:50:49.347541 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:49 crc kubenswrapper[4940]: E0223 08:50:49.347636 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:49 crc kubenswrapper[4940]: E0223 08:50:49.347686 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:49 crc kubenswrapper[4940]: E0223 08:50:49.348078 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:49 crc kubenswrapper[4940]: E0223 08:50:49.606391 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:51 crc kubenswrapper[4940]: I0223 08:50:51.344654 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:51 crc kubenswrapper[4940]: I0223 08:50:51.344704 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:51 crc kubenswrapper[4940]: I0223 08:50:51.344732 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:51 crc kubenswrapper[4940]: I0223 08:50:51.344752 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:51 crc kubenswrapper[4940]: E0223 08:50:51.344975 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:51 crc kubenswrapper[4940]: E0223 08:50:51.345917 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:51 crc kubenswrapper[4940]: E0223 08:50:51.346007 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:51 crc kubenswrapper[4940]: E0223 08:50:51.346046 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:52 crc kubenswrapper[4940]: I0223 08:50:52.346528 4940 scope.go:117] "RemoveContainer" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" Feb 23 08:50:52 crc kubenswrapper[4940]: E0223 08:50:52.346853 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qkw6w_openshift-ovn-kubernetes(d0b5a971-c6f4-4518-9bb3-49d228275668)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" Feb 23 08:50:53 crc kubenswrapper[4940]: I0223 08:50:53.345868 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:53 crc kubenswrapper[4940]: I0223 08:50:53.345941 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:53 crc kubenswrapper[4940]: I0223 08:50:53.345891 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:53 crc kubenswrapper[4940]: I0223 08:50:53.346026 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:53 crc kubenswrapper[4940]: E0223 08:50:53.346195 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:53 crc kubenswrapper[4940]: E0223 08:50:53.346599 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:53 crc kubenswrapper[4940]: E0223 08:50:53.346852 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:53 crc kubenswrapper[4940]: E0223 08:50:53.346927 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:54 crc kubenswrapper[4940]: E0223 08:50:54.607789 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:50:55 crc kubenswrapper[4940]: I0223 08:50:55.345231 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:55 crc kubenswrapper[4940]: I0223 08:50:55.345276 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:55 crc kubenswrapper[4940]: E0223 08:50:55.345447 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:55 crc kubenswrapper[4940]: I0223 08:50:55.345731 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:55 crc kubenswrapper[4940]: I0223 08:50:55.345849 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:55 crc kubenswrapper[4940]: E0223 08:50:55.345898 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:55 crc kubenswrapper[4940]: E0223 08:50:55.346245 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:55 crc kubenswrapper[4940]: E0223 08:50:55.346344 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:57 crc kubenswrapper[4940]: I0223 08:50:57.345087 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:57 crc kubenswrapper[4940]: I0223 08:50:57.345186 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:57 crc kubenswrapper[4940]: E0223 08:50:57.345310 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:57 crc kubenswrapper[4940]: I0223 08:50:57.345212 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:57 crc kubenswrapper[4940]: E0223 08:50:57.345409 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:57 crc kubenswrapper[4940]: E0223 08:50:57.345712 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:57 crc kubenswrapper[4940]: I0223 08:50:57.345760 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:57 crc kubenswrapper[4940]: E0223 08:50:57.345878 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:59 crc kubenswrapper[4940]: I0223 08:50:59.344930 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:50:59 crc kubenswrapper[4940]: I0223 08:50:59.345007 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:50:59 crc kubenswrapper[4940]: I0223 08:50:59.344996 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:50:59 crc kubenswrapper[4940]: I0223 08:50:59.345086 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:50:59 crc kubenswrapper[4940]: E0223 08:50:59.347321 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:50:59 crc kubenswrapper[4940]: E0223 08:50:59.347448 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:50:59 crc kubenswrapper[4940]: E0223 08:50:59.347664 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:50:59 crc kubenswrapper[4940]: E0223 08:50:59.347810 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:50:59 crc kubenswrapper[4940]: E0223 08:50:59.609560 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:51:01 crc kubenswrapper[4940]: I0223 08:51:01.345046 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:01 crc kubenswrapper[4940]: I0223 08:51:01.345139 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:01 crc kubenswrapper[4940]: E0223 08:51:01.345192 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:01 crc kubenswrapper[4940]: I0223 08:51:01.345236 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:01 crc kubenswrapper[4940]: I0223 08:51:01.345248 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:01 crc kubenswrapper[4940]: E0223 08:51:01.345379 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:01 crc kubenswrapper[4940]: E0223 08:51:01.345463 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:01 crc kubenswrapper[4940]: E0223 08:51:01.345595 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:03 crc kubenswrapper[4940]: I0223 08:51:03.344760 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:03 crc kubenswrapper[4940]: I0223 08:51:03.344794 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:03 crc kubenswrapper[4940]: E0223 08:51:03.344935 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:03 crc kubenswrapper[4940]: I0223 08:51:03.345016 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:03 crc kubenswrapper[4940]: E0223 08:51:03.345129 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:03 crc kubenswrapper[4940]: I0223 08:51:03.345247 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:03 crc kubenswrapper[4940]: E0223 08:51:03.345225 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:03 crc kubenswrapper[4940]: E0223 08:51:03.345387 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:04 crc kubenswrapper[4940]: E0223 08:51:04.611058 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.913162 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/1.log" Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.913954 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/0.log" Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.914041 4940 generic.go:334] "Generic (PLEG): container finished" podID="ec3904ad-5d0b-46b4-9c13-68454d9a3cb2" containerID="fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0" exitCode=1 Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.914179 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerDied","Data":"fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0"} Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.914271 4940 scope.go:117] 
"RemoveContainer" containerID="f5c3f117be2e1eba2edb562924ecec6d7ac44b508366bc7c63a86214ada740dc" Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.914746 4940 scope.go:117] "RemoveContainer" containerID="fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0" Feb 23 08:51:04 crc kubenswrapper[4940]: E0223 08:51:04.915016 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-czrqm_openshift-multus(ec3904ad-5d0b-46b4-9c13-68454d9a3cb2)\"" pod="openshift-multus/multus-czrqm" podUID="ec3904ad-5d0b-46b4-9c13-68454d9a3cb2" Feb 23 08:51:04 crc kubenswrapper[4940]: I0223 08:51:04.936244 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cvmmg" podStartSLOduration=127.936218934 podStartE2EDuration="2m7.936218934s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:50:36.833802476 +0000 UTC m=+168.217008693" watchObservedRunningTime="2026-02-23 08:51:04.936218934 +0000 UTC m=+196.319425111" Feb 23 08:51:05 crc kubenswrapper[4940]: I0223 08:51:05.344764 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:05 crc kubenswrapper[4940]: I0223 08:51:05.344840 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:05 crc kubenswrapper[4940]: I0223 08:51:05.344765 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:05 crc kubenswrapper[4940]: I0223 08:51:05.345038 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:05 crc kubenswrapper[4940]: E0223 08:51:05.344983 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:05 crc kubenswrapper[4940]: E0223 08:51:05.345190 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:05 crc kubenswrapper[4940]: E0223 08:51:05.345355 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:05 crc kubenswrapper[4940]: E0223 08:51:05.345553 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:05 crc kubenswrapper[4940]: I0223 08:51:05.926131 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/1.log" Feb 23 08:51:06 crc kubenswrapper[4940]: I0223 08:51:06.345720 4940 scope.go:117] "RemoveContainer" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" Feb 23 08:51:06 crc kubenswrapper[4940]: I0223 08:51:06.931785 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/3.log" Feb 23 08:51:06 crc kubenswrapper[4940]: I0223 08:51:06.934248 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerStarted","Data":"340eaaffa94ddb12c76a83ddbb966d6a4f34ca8e74f15b11fe931f5f2c8cca12"} Feb 23 08:51:06 crc kubenswrapper[4940]: I0223 08:51:06.934670 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:51:06 crc kubenswrapper[4940]: I0223 08:51:06.965475 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podStartSLOduration=129.965443581 podStartE2EDuration="2m9.965443581s" 
podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:06.963483918 +0000 UTC m=+198.346690075" watchObservedRunningTime="2026-02-23 08:51:06.965443581 +0000 UTC m=+198.348649738" Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.261205 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.261307 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.261393 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.261413 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:53:09.261355437 +0000 UTC m=+320.644561634 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.261460 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:53:09.26144532 +0000 UTC m=+320.644651507 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.261571 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.261651 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.261683 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:53:09.261669797 +0000 UTC m=+320.644875954 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.344863 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.344935 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.344953 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.344885 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.345053 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.345154 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.345230 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.345312 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.362529 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.362590 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.362751 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.362770 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.362782 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.362831 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:53:09.362815188 +0000 UTC m=+320.746021345 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.363025 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.363098 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.363123 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.363223 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:53:09.363196601 +0000 UTC m=+320.746402798 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.415733 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jwb9b"] Feb 23 08:51:07 crc kubenswrapper[4940]: I0223 08:51:07.937519 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:07 crc kubenswrapper[4940]: E0223 08:51:07.937701 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:09 crc kubenswrapper[4940]: I0223 08:51:09.345391 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:09 crc kubenswrapper[4940]: I0223 08:51:09.345467 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:09 crc kubenswrapper[4940]: I0223 08:51:09.346093 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:09 crc kubenswrapper[4940]: I0223 08:51:09.346721 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:09 crc kubenswrapper[4940]: E0223 08:51:09.346968 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:09 crc kubenswrapper[4940]: E0223 08:51:09.346844 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:09 crc kubenswrapper[4940]: E0223 08:51:09.346894 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:09 crc kubenswrapper[4940]: E0223 08:51:09.346708 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:09 crc kubenswrapper[4940]: E0223 08:51:09.612379 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:51:11 crc kubenswrapper[4940]: I0223 08:51:11.345480 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:11 crc kubenswrapper[4940]: I0223 08:51:11.345636 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:11 crc kubenswrapper[4940]: E0223 08:51:11.345710 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:11 crc kubenswrapper[4940]: I0223 08:51:11.345743 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:11 crc kubenswrapper[4940]: I0223 08:51:11.345786 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:11 crc kubenswrapper[4940]: E0223 08:51:11.345923 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:11 crc kubenswrapper[4940]: E0223 08:51:11.346027 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:11 crc kubenswrapper[4940]: E0223 08:51:11.346181 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:13 crc kubenswrapper[4940]: I0223 08:51:13.345228 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:13 crc kubenswrapper[4940]: I0223 08:51:13.345228 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:13 crc kubenswrapper[4940]: E0223 08:51:13.345393 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:13 crc kubenswrapper[4940]: I0223 08:51:13.345249 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:13 crc kubenswrapper[4940]: E0223 08:51:13.345465 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:13 crc kubenswrapper[4940]: I0223 08:51:13.345248 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:13 crc kubenswrapper[4940]: E0223 08:51:13.345548 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:13 crc kubenswrapper[4940]: E0223 08:51:13.345598 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:14 crc kubenswrapper[4940]: E0223 08:51:14.614152 4940 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.345493 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.345535 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.345559 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:15 crc kubenswrapper[4940]: E0223 08:51:15.345697 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.345783 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:15 crc kubenswrapper[4940]: E0223 08:51:15.345962 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:15 crc kubenswrapper[4940]: E0223 08:51:15.346266 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:15 crc kubenswrapper[4940]: E0223 08:51:15.346454 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.346575 4940 scope.go:117] "RemoveContainer" containerID="fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.967726 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/1.log" Feb 23 08:51:15 crc kubenswrapper[4940]: I0223 08:51:15.967795 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerStarted","Data":"15e9457644bb5f9d8fa27855d11196a713b74e007cbba9692fff4c486fc19e99"} Feb 23 08:51:17 crc kubenswrapper[4940]: I0223 08:51:17.345311 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:17 crc kubenswrapper[4940]: I0223 08:51:17.345359 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:17 crc kubenswrapper[4940]: E0223 08:51:17.346033 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:17 crc kubenswrapper[4940]: I0223 08:51:17.345438 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:17 crc kubenswrapper[4940]: I0223 08:51:17.345396 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:17 crc kubenswrapper[4940]: E0223 08:51:17.346154 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:17 crc kubenswrapper[4940]: E0223 08:51:17.346090 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:17 crc kubenswrapper[4940]: E0223 08:51:17.346341 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:19 crc kubenswrapper[4940]: I0223 08:51:19.344849 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:19 crc kubenswrapper[4940]: I0223 08:51:19.344993 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:19 crc kubenswrapper[4940]: I0223 08:51:19.345031 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:19 crc kubenswrapper[4940]: I0223 08:51:19.346336 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:19 crc kubenswrapper[4940]: E0223 08:51:19.346353 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:51:19 crc kubenswrapper[4940]: E0223 08:51:19.346788 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jwb9b" podUID="d8dd2da9-cea0-44f5-8c93-91b79c7f66ea" Feb 23 08:51:19 crc kubenswrapper[4940]: E0223 08:51:19.346885 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:51:19 crc kubenswrapper[4940]: E0223 08:51:19.347140 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.345464 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.345684 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.345808 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.346828 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.349976 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.350420 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.350584 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.350767 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.350590 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 08:51:21 crc kubenswrapper[4940]: I0223 08:51:21.351372 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.292763 4940 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.342279 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xlz2g"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.343017 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.345659 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.346926 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.346956 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.353045 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.353667 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.360958 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.361507 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.361917 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.362401 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.362588 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.362635 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.362741 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.363330 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.363501 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.363450 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.365942 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.367373 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.367541 4940 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.368940 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.369354 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.375836 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.376815 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.376859 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.376901 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377015 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377044 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377345 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377399 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377347 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377565 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377685 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377752 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377932 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377945 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-224dd"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.377936 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.378412 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.378461 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.378683 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rrhk2"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.378823 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.379079 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.379084 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.379607 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.380121 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.380415 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.380935 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-95wjd"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.381277 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.384677 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.385162 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6qcpm"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.385418 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kzrfw"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.385775 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.385820 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.385783 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.386168 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6qcpm"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.399731 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401197 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401271 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401473 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401589 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401698 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401727 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401835 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401851 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401928 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.401980 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402008 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402081 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402100 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402183 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402328 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402465 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402941 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.403084 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.403205 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.403320 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.402082 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.407360 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.411028 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.411333 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.422526 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-znvc9"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.422819 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.423109 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8ls95"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.423706 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8ls95"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.423824 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5tf64"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.424146 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.424404 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.424438 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-znvc9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.424638 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.424673 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.425070 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.425281 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.425514 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.426578 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n95bv"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.428901 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2zgdn"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.429301 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2zgdn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.429574 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.432687 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.435339 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.437383 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.439678 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.444258 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.445754 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.446361 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.446883 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.447102 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.447148 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2c95j"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.447978 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.448238 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.448354 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.448838 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.448954 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449130 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449429 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449531 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449651 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449755 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449870 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449899 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.449759 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.450189 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.450310 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.450422 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.450526 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.450651 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.451504 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.454734 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.455260 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.455529 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.455834 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.457531 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.457923 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.458318 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.458719 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.458840 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.459132 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.459582 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.461754 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.462173 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f9p7v"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.482831 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.483486 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.486230 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.486665 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.486753 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487002 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487135 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487241 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-config\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487347 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/550e2596-a506-465a-91ab-44280e7727d3-trusted-ca\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487278 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487440 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550e2596-a506-465a-91ab-44280e7727d3-config\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487653 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/973d0526-cb43-44e0-adc5-9a5438c906f9-auth-proxy-config\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487748 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973d0526-cb43-44e0-adc5-9a5438c906f9-config\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487858 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af5df6df-7f6c-40a3-b1da-44af29cdee8b-serving-cert\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487965 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vswr\" (UniqueName: \"kubernetes.io/projected/af5df6df-7f6c-40a3-b1da-44af29cdee8b-kube-api-access-2vswr\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488065 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550e2596-a506-465a-91ab-44280e7727d3-serving-cert\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488172 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-serving-cert\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488279 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-config\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488394 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7t7z\" (UniqueName: \"kubernetes.io/projected/550e2596-a506-465a-91ab-44280e7727d3-kube-api-access-n7t7z\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487389 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487415 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487437 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488685 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539d5f30-be10-4fc5-bf8f-11442d337bac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488776 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-service-ca-bundle\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487459 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.487725 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.488878 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxrv\" (UniqueName: \"kubernetes.io/projected/539d5f30-be10-4fc5-bf8f-11442d337bac-kube-api-access-lhxrv\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.489089 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6jx\" (UniqueName: \"kubernetes.io/projected/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-kube-api-access-8v6jx\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.489140 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/973d0526-cb43-44e0-adc5-9a5438c906f9-machine-approver-tls\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.489185 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/539d5f30-be10-4fc5-bf8f-11442d337bac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.489247 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9j95\" (UniqueName: \"kubernetes.io/projected/973d0526-cb43-44e0-adc5-9a5438c906f9-kube-api-access-r9j95\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.489283 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-client-ca\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.489547 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.491428 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.492689 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.497627 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.497883 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.498282 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.498442 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.498444 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.499102 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.499626 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.502358 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.502753 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.505473 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.506017 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.506830 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.507496 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.508063 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.508953 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.509901 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.510693 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c"]
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.511200 4940 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.512069 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.513991 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xlz2g"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.514082 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.515466 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tqnlh"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.516429 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.518943 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9x9v"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.519344 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.521007 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.521808 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.524818 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.525777 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-224dd"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.526359 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.527602 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.527650 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.529167 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.532259 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-95wjd"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.533645 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.534820 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8ls95"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.536848 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kzrfw"] Feb 
23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.537949 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.538951 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.539954 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.540810 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.542058 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.543350 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.545093 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.545890 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.547168 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6qcpm"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.548233 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 
08:51:26.549596 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.550986 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.552102 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.553365 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n95bv"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.554720 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f9p7v"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.555641 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rrhk2"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.557675 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5tf64"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.559270 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.560945 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2c95j"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.567175 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-znvc9"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.568680 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-2zgdn"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.571176 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.571211 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sppfz"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.572458 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4hz9"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.573218 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6f7z7"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.573605 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.573869 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.573962 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.574601 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zgt46"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.574601 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.575637 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.577028 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.578714 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.581760 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.584852 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.589471 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9x9v"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.590311 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7t7z\" (UniqueName: \"kubernetes.io/projected/550e2596-a506-465a-91ab-44280e7727d3-kube-api-access-n7t7z\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.590452 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539d5f30-be10-4fc5-bf8f-11442d337bac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.590554 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-service-ca-bundle\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.590669 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhxrv\" (UniqueName: \"kubernetes.io/projected/539d5f30-be10-4fc5-bf8f-11442d337bac-kube-api-access-lhxrv\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.590749 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v6jx\" (UniqueName: \"kubernetes.io/projected/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-kube-api-access-8v6jx\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.590846 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/973d0526-cb43-44e0-adc5-9a5438c906f9-machine-approver-tls\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591102 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/539d5f30-be10-4fc5-bf8f-11442d337bac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: 
\"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591227 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9j95\" (UniqueName: \"kubernetes.io/projected/973d0526-cb43-44e0-adc5-9a5438c906f9-kube-api-access-r9j95\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591329 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-client-ca\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591405 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591473 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-config\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591531 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-service-ca-bundle\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591543 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/550e2596-a506-465a-91ab-44280e7727d3-trusted-ca\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591693 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550e2596-a506-465a-91ab-44280e7727d3-config\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591771 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/973d0526-cb43-44e0-adc5-9a5438c906f9-auth-proxy-config\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591840 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973d0526-cb43-44e0-adc5-9a5438c906f9-config\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 
08:51:26.591915 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af5df6df-7f6c-40a3-b1da-44af29cdee8b-serving-cert\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.591996 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vswr\" (UniqueName: \"kubernetes.io/projected/af5df6df-7f6c-40a3-b1da-44af29cdee8b-kube-api-access-2vswr\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.592067 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550e2596-a506-465a-91ab-44280e7727d3-serving-cert\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.592138 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-serving-cert\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.592236 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-config\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: 
\"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.592298 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/539d5f30-be10-4fc5-bf8f-11442d337bac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.593378 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sppfz"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.593426 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.593438 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4hz9"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.595040 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zgt46"] Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.595235 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-client-ca\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.596046 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973d0526-cb43-44e0-adc5-9a5438c906f9-config\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.596503 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/973d0526-cb43-44e0-adc5-9a5438c906f9-machine-approver-tls\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.596674 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-config\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.596803 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.597382 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550e2596-a506-465a-91ab-44280e7727d3-config\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.597474 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/550e2596-a506-465a-91ab-44280e7727d3-trusted-ca\") pod 
\"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.598053 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-config\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.598470 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/539d5f30-be10-4fc5-bf8f-11442d337bac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.599231 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/973d0526-cb43-44e0-adc5-9a5438c906f9-auth-proxy-config\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.599558 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af5df6df-7f6c-40a3-b1da-44af29cdee8b-serving-cert\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.600583 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-serving-cert\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.602490 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/550e2596-a506-465a-91ab-44280e7727d3-serving-cert\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.612755 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.625591 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.645318 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.665221 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.686354 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.705499 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.725189 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.745251 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.793538 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: E0223 08:51:26.793982 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.293969103 +0000 UTC m=+218.677175260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.794978 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13ea819-2f94-423e-ab3f-c7b6d03ad686-serving-cert\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795134 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05cfdf5e-5390-4f32-986d-02872c05f444-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795221 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mcm\" (UniqueName: \"kubernetes.io/projected/916e1f6f-2bfc-41e7-86c2-6c379e3638c1-kube-api-access-94mcm\") pod \"downloads-7954f5f757-6qcpm\" (UID: \"916e1f6f-2bfc-41e7-86c2-6c379e3638c1\") " pod="openshift-console/downloads-7954f5f757-6qcpm"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795294 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-config\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795367 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795433 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795508 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795585 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4cz\" (UniqueName: \"kubernetes.io/projected/e13ea819-2f94-423e-ab3f-c7b6d03ad686-kube-api-access-4m4cz\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795725 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-serving-cert\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795873 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-dir\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.795952 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-etcd-client\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796148 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-audit-policies\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796290 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05cfdf5e-5390-4f32-986d-02872c05f444-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796390 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796466 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-policies\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796531 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796604 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796701 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796775 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-client-ca\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796879 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.796972 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.797059 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snx9t\" (UniqueName: \"kubernetes.io/projected/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-kube-api-access-snx9t\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.797764 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.797939 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvbx\" (UniqueName: \"kubernetes.io/projected/c733e45d-a072-4619-b2f8-aea6d77b112f-kube-api-access-ltvbx\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798037 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-bound-sa-token\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798121 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btc8\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-kube-api-access-5btc8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798212 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9frb\" (UniqueName: \"kubernetes.io/projected/48ab701f-fc67-4d29-9cad-337e223f6f87-kube-api-access-d9frb\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798356 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-registry-tls\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798441 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-registry-certificates\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798575 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798689 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-trusted-ca\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798859 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.798969 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799043 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799125 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-encryption-config\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799208 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799286 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799359 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48ab701f-fc67-4d29-9cad-337e223f6f87-audit-dir\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799440 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-config\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799686 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgflg\" (UniqueName: \"kubernetes.io/projected/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-kube-api-access-rgflg\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799783 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-images\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.799945 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.800033 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.805652 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.825449 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.845858 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.866595 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.886072 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.901541 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.901759 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5skj\" (UniqueName: \"kubernetes.io/projected/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-kube-api-access-f5skj\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.901793 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79wc\" (UniqueName: \"kubernetes.io/projected/b546c0fc-b66f-4f2b-ab03-364362906f88-kube-api-access-v79wc\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"
Feb 23 08:51:26 crc kubenswrapper[4940]: E0223 08:51:26.901848 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.401804067 +0000 UTC m=+218.785010264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.901922 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-config\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902042 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-etcd-serving-ca\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902117 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtnk\" (UniqueName: \"kubernetes.io/projected/71aa9018-d3be-454d-8d1c-5853f3971151-kube-api-access-4wtnk\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902165 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-default-certificate\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902198 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57505051-bc9e-499e-9013-6365439ebb68-config\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902241 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86468b3-88b2-4c49-b807-c00eceb862e2-serving-cert\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902293 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902325 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5rx\" (UniqueName: \"kubernetes.io/projected/1b16df0c-b660-4f3c-9d26-cfff395d5c88-kube-api-access-5b5rx\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902355 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a60bce1-d0e8-451a-91da-396ec5d5c53b-srv-cert\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902430 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05cfdf5e-5390-4f32-986d-02872c05f444-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902478 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-config\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902529 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902604 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrsz\" (UniqueName: \"kubernetes.io/projected/4f793610-43f5-4faf-b61d-e2330db0b177-kube-api-access-ksrsz\") pod \"cluster-samples-operator-665b6dd947-wwhg8\" (UID: \"4f793610-43f5-4faf-b61d-e2330db0b177\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902736 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1fee6a64-486f-4aef-9242-8bf07796d6e3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902807 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e17925-dd05-43de-8e22-105f0002b651-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902924 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvf7\" (UniqueName: \"kubernetes.io/projected/9de4a20c-3f76-4aa8-8347-42f3b3f53145-kube-api-access-xgvf7\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.902974 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-registration-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903023 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqvx\" (UniqueName: \"kubernetes.io/projected/1fee6a64-486f-4aef-9242-8bf07796d6e3-kube-api-access-7zqvx\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903124 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903187 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a6d2fa-46ed-4669-ac19-c335595a24fd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903249 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-serving-cert\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903258 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05cfdf5e-5390-4f32-986d-02872c05f444-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903288 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-dir\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903358 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26986446-4844-49cf-a77d-7d316d2d826b-etcd-client\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903401 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903441 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-csi-data-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903452 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-dir\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903603 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lfjv\" (UniqueName: \"kubernetes.io/projected/783e15c8-9066-455f-878d-86215d82093b-kube-api-access-5lfjv\") pod \"migrator-59844c95c7-hrnj4\" (UID: \"783e15c8-9066-455f-878d-86215d82093b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903804 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-config\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.903726 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-audit-policies\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.904736 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48sg\" (UniqueName: \"kubernetes.io/projected/63609b48-d163-4000-a23b-bb70a6719c5c-kube-api-access-l48sg\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7"
Feb 23 08:51:26
crc kubenswrapper[4940]: I0223 08:51:26.904760 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-audit-policies\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.904788 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05cfdf5e-5390-4f32-986d-02872c05f444-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.904822 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-oauth-serving-cert\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.904856 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btlz\" (UniqueName: \"kubernetes.io/projected/21f25477-51d5-480d-a252-f821cc008560-kube-api-access-4btlz\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.904887 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46095393-c72a-4539-b3e4-e2f3f35301b8-proxy-tls\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: 
\"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.904946 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-proxy-tls\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905021 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-config\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905060 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905081 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dfca0b5-693a-4e28-ae27-d5532038616c-metrics-tls\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905116 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24821bad-09c7-4880-bf0e-a6e829284f2e-cert\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905134 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905168 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905196 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-client-ca\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.905256 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.906322 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.906599 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-serving-cert\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907006 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907085 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b16df0c-b660-4f3c-9d26-cfff395d5c88-secret-volume\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907145 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 
crc kubenswrapper[4940]: I0223 08:51:26.907203 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86468b3-88b2-4c49-b807-c00eceb862e2-config\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907250 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-bound-sa-token\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907297 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btc8\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-kube-api-access-5btc8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907342 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-image-import-ca\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907430 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-socket-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: 
\"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907475 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a6d2fa-46ed-4669-ac19-c335595a24fd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907528 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-registry-tls\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907575 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-registry-certificates\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907656 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a60bce1-d0e8-451a-91da-396ec5d5c53b-profile-collector-cert\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907710 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f793610-43f5-4faf-b61d-e2330db0b177-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wwhg8\" (UID: \"4f793610-43f5-4faf-b61d-e2330db0b177\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907758 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-config\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907812 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b546c0fc-b66f-4f2b-ab03-364362906f88-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907861 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnb8z\" (UniqueName: \"kubernetes.io/projected/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-kube-api-access-cnb8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907905 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689w2\" (UniqueName: 
\"kubernetes.io/projected/7a60bce1-d0e8-451a-91da-396ec5d5c53b-kube-api-access-689w2\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907906 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907949 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-tmpfs\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.907992 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5kv7\" (UniqueName: \"kubernetes.io/projected/07ef0edd-666b-4ced-9a27-51433a59c6c0-kube-api-access-b5kv7\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908081 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71aa9018-d3be-454d-8d1c-5853f3971151-audit-dir\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908123 
4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-metrics-tls\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908210 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908264 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xknx\" (UniqueName: \"kubernetes.io/projected/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-kube-api-access-5xknx\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908344 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908392 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908441 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908481 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-mountpoint-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908525 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-encryption-config\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908573 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc 
kubenswrapper[4940]: I0223 08:51:26.908659 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzg7k\" (UniqueName: \"kubernetes.io/projected/46095393-c72a-4539-b3e4-e2f3f35301b8-kube-api-access-jzg7k\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908665 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908707 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dfca0b5-693a-4e28-ae27-d5532038616c-config-volume\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908756 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908797 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48ab701f-fc67-4d29-9cad-337e223f6f87-audit-dir\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908837 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-trusted-ca-bundle\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908877 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57505051-bc9e-499e-9013-6365439ebb68-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908920 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-etcd-client\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.908966 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909013 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909059 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngnfq\" (UniqueName: \"kubernetes.io/projected/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-kube-api-access-ngnfq\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909108 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbvn\" (UniqueName: \"kubernetes.io/projected/24821bad-09c7-4880-bf0e-a6e829284f2e-kube-api-access-2vbvn\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909154 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57505051-bc9e-499e-9013-6365439ebb68-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909058 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909476 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48ab701f-fc67-4d29-9cad-337e223f6f87-audit-dir\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909917 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05cfdf5e-5390-4f32-986d-02872c05f444-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.909988 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.910527 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.910849 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-audit\") 
pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.910906 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-serving-cert\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.910975 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911124 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13ea819-2f94-423e-ab3f-c7b6d03ad686-serving-cert\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911177 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-trusted-ca\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911232 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-94mcm\" (UniqueName: \"kubernetes.io/projected/916e1f6f-2bfc-41e7-86c2-6c379e3638c1-kube-api-access-94mcm\") pod \"downloads-7954f5f757-6qcpm\" (UID: \"916e1f6f-2bfc-41e7-86c2-6c379e3638c1\") " pod="openshift-console/downloads-7954f5f757-6qcpm" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911284 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911333 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/97dc456f-7c0c-49f0-ad5b-e4c791429d57-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2c95j\" (UID: \"97dc456f-7c0c-49f0-ad5b-e4c791429d57\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911380 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-signing-key\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911709 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-client-ca\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: 
I0223 08:51:26.912035 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912061 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912275 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-registry-certificates\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.911737 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-metrics-certs\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912591 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912779 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912842 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-webhook-cert\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912896 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.912953 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m4cz\" (UniqueName: \"kubernetes.io/projected/e13ea819-2f94-423e-ab3f-c7b6d03ad686-kube-api-access-4m4cz\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.913000 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-oauth-config\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.913185 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: E0223 08:51:26.913236 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.413216229 +0000 UTC m=+218.796422416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.913462 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2h4d\" (UniqueName: \"kubernetes.io/projected/97dc456f-7c0c-49f0-ad5b-e4c791429d57-kube-api-access-k2h4d\") pod \"multus-admission-controller-857f4d67dd-2c95j\" (UID: \"97dc456f-7c0c-49f0-ad5b-e4c791429d57\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.913530 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-etcd-service-ca\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.913678 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-etcd-client\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.913731 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46095393-c72a-4539-b3e4-e2f3f35301b8-images\") pod 
\"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914229 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914488 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914538 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-certs\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914585 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914657 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44rxh\" (UniqueName: \"kubernetes.io/projected/2f476562-bc2b-4c2d-8283-fae9b3f8ac4e-kube-api-access-44rxh\") pod \"package-server-manager-789f6589d5-glhjd\" (UID: \"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914702 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef3ada46-965f-42e7-b89b-a67618bff8c6-metrics-tls\") pod \"dns-operator-744455d44c-8ls95\" (UID: \"ef3ada46-965f-42e7-b89b-a67618bff8c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914708 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914832 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsnzz\" (UniqueName: \"kubernetes.io/projected/ef3ada46-965f-42e7-b89b-a67618bff8c6-kube-api-access-zsnzz\") pod \"dns-operator-744455d44c-8ls95\" (UID: \"ef3ada46-965f-42e7-b89b-a67618bff8c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914870 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-plugins-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: 
\"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.914947 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-policies\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.915007 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmntp\" (UniqueName: \"kubernetes.io/projected/feea3a62-1f72-4b46-9655-521e8ff5c323-kube-api-access-bmntp\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.915135 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssbn\" (UniqueName: \"kubernetes.io/projected/26986446-4844-49cf-a77d-7d316d2d826b-kube-api-access-9ssbn\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.915264 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.915327 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f476562-bc2b-4c2d-8283-fae9b3f8ac4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-glhjd\" (UID: \"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.919088 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48ab701f-fc67-4d29-9cad-337e223f6f87-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.919550 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-encryption-config\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.919687 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13ea819-2f94-423e-ab3f-c7b6d03ad686-serving-cert\") pod \"controller-manager-879f6c89f-224dd\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.920078 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc 
kubenswrapper[4940]: I0223 08:51:26.920187 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.920231 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-policies\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.920211 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-registry-tls\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.920515 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.920884 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-serving-cert\") pod \"apiserver-76f77b778f-znvc9\" (UID: 
\"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.921277 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.921438 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.922240 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snx9t\" (UniqueName: \"kubernetes.io/projected/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-kube-api-access-snx9t\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.922331 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvbx\" (UniqueName: \"kubernetes.io/projected/c733e45d-a072-4619-b2f8-aea6d77b112f-kube-api-access-ltvbx\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.921284 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/48ab701f-fc67-4d29-9cad-337e223f6f87-etcd-client\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.922504 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46095393-c72a-4539-b3e4-e2f3f35301b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.923116 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9frb\" (UniqueName: \"kubernetes.io/projected/48ab701f-fc67-4d29-9cad-337e223f6f87-kube-api-access-d9frb\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.923217 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-service-ca\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.923363 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdlj2\" (UniqueName: \"kubernetes.io/projected/d86468b3-88b2-4c49-b807-c00eceb862e2-kube-api-access-zdlj2\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:26 crc 
kubenswrapper[4940]: I0223 08:51:26.923834 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b546c0fc-b66f-4f2b-ab03-364362906f88-srv-cert\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.923931 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-node-bootstrap-token\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.924016 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfm8x\" (UniqueName: \"kubernetes.io/projected/2dfca0b5-693a-4e28-ae27-d5532038616c-kube-api-access-zfm8x\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.924225 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-trusted-ca\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.924355 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: 
\"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.924449 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fee6a64-486f-4aef-9242-8bf07796d6e3-serving-cert\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925064 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-trusted-ca-bundle\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925186 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26986446-4844-49cf-a77d-7d316d2d826b-serving-cert\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925316 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-encryption-config\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925393 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-config\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925560 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-stats-auth\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925709 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-trusted-ca\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925787 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-config\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.925928 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgflg\" (UniqueName: \"kubernetes.io/projected/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-kube-api-access-rgflg\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926053 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926130 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b16df0c-b660-4f3c-9d26-cfff395d5c88-config-volume\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926194 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-images\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926259 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71aa9018-d3be-454d-8d1c-5853f3971151-node-pullsecrets\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926309 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-apiservice-cert\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: 
\"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926340 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-config\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926370 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmw9\" (UniqueName: \"kubernetes.io/projected/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-kube-api-access-hqmw9\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926412 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-687p7\" (UID: \"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926562 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44mh\" (UniqueName: \"kubernetes.io/projected/32e17925-dd05-43de-8e22-105f0002b651-kube-api-access-w44mh\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" 
Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926664 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feea3a62-1f72-4b46-9655-521e8ff5c323-service-ca-bundle\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926776 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e17925-dd05-43de-8e22-105f0002b651-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.926851 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-etcd-ca\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.927043 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctks\" (UniqueName: \"kubernetes.io/projected/f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb-kube-api-access-mctks\") pod \"control-plane-machine-set-operator-78cbb6b69f-687p7\" (UID: \"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.927124 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-signing-cabundle\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.927169 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a6d2fa-46ed-4669-ac19-c335595a24fd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.927488 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.927760 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-images\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.946288 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.973921 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 08:51:26 crc kubenswrapper[4940]: I0223 08:51:26.985643 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.004861 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 08:51:27 
crc kubenswrapper[4940]: I0223 08:51:27.025301 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.028450 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.028664 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.528600193 +0000 UTC m=+218.911806390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.028752 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-certs\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.028821 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsnzz\" (UniqueName: 
\"kubernetes.io/projected/ef3ada46-965f-42e7-b89b-a67618bff8c6-kube-api-access-zsnzz\") pod \"dns-operator-744455d44c-8ls95\" (UID: \"ef3ada46-965f-42e7-b89b-a67618bff8c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.028871 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-plugins-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.028920 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44rxh\" (UniqueName: \"kubernetes.io/projected/2f476562-bc2b-4c2d-8283-fae9b3f8ac4e-kube-api-access-44rxh\") pod \"package-server-manager-789f6589d5-glhjd\" (UID: \"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029091 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef3ada46-965f-42e7-b89b-a67618bff8c6-metrics-tls\") pod \"dns-operator-744455d44c-8ls95\" (UID: \"ef3ada46-965f-42e7-b89b-a67618bff8c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029148 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmntp\" (UniqueName: \"kubernetes.io/projected/feea3a62-1f72-4b46-9655-521e8ff5c323-kube-api-access-bmntp\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029218 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9ssbn\" (UniqueName: \"kubernetes.io/projected/26986446-4844-49cf-a77d-7d316d2d826b-kube-api-access-9ssbn\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029258 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-plugins-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029272 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f476562-bc2b-4c2d-8283-fae9b3f8ac4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-glhjd\" (UID: \"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029326 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-serving-cert\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029391 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46095393-c72a-4539-b3e4-e2f3f35301b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029485 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-service-ca\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029539 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdlj2\" (UniqueName: \"kubernetes.io/projected/d86468b3-88b2-4c49-b807-c00eceb862e2-kube-api-access-zdlj2\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029640 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b546c0fc-b66f-4f2b-ab03-364362906f88-srv-cert\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029692 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-node-bootstrap-token\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029760 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfm8x\" (UniqueName: \"kubernetes.io/projected/2dfca0b5-693a-4e28-ae27-d5532038616c-kube-api-access-zfm8x\") pod 
\"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029832 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-trusted-ca-bundle\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029877 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26986446-4844-49cf-a77d-7d316d2d826b-serving-cert\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029926 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fee6a64-486f-4aef-9242-8bf07796d6e3-serving-cert\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.029975 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-encryption-config\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030018 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-config\") pod 
\"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030084 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-stats-auth\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030149 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030196 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b16df0c-b660-4f3c-9d26-cfff395d5c88-config-volume\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030225 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-service-ca\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030242 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/71aa9018-d3be-454d-8d1c-5853f3971151-node-pullsecrets\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030291 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-apiservice-cert\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030339 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmw9\" (UniqueName: \"kubernetes.io/projected/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-kube-api-access-hqmw9\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030386 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44mh\" (UniqueName: \"kubernetes.io/projected/32e17925-dd05-43de-8e22-105f0002b651-kube-api-access-w44mh\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030433 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feea3a62-1f72-4b46-9655-521e8ff5c323-service-ca-bundle\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 
08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030483 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-687p7\" (UID: \"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030569 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e17925-dd05-43de-8e22-105f0002b651-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030663 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-etcd-ca\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030760 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctks\" (UniqueName: \"kubernetes.io/projected/f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb-kube-api-access-mctks\") pod \"control-plane-machine-set-operator-78cbb6b69f-687p7\" (UID: \"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030812 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-signing-cabundle\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030859 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a6d2fa-46ed-4669-ac19-c335595a24fd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030905 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-etcd-serving-ca\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.030952 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wtnk\" (UniqueName: \"kubernetes.io/projected/71aa9018-d3be-454d-8d1c-5853f3971151-kube-api-access-4wtnk\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031002 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5skj\" (UniqueName: \"kubernetes.io/projected/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-kube-api-access-f5skj\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031053 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v79wc\" (UniqueName: \"kubernetes.io/projected/b546c0fc-b66f-4f2b-ab03-364362906f88-kube-api-access-v79wc\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031098 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-config\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031144 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-default-certificate\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031189 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57505051-bc9e-499e-9013-6365439ebb68-config\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031231 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86468b3-88b2-4c49-b807-c00eceb862e2-serving-cert\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:27 crc 
kubenswrapper[4940]: I0223 08:51:27.031282 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031330 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b5rx\" (UniqueName: \"kubernetes.io/projected/1b16df0c-b660-4f3c-9d26-cfff395d5c88-kube-api-access-5b5rx\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031376 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a60bce1-d0e8-451a-91da-396ec5d5c53b-srv-cert\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031428 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrsz\" (UniqueName: \"kubernetes.io/projected/4f793610-43f5-4faf-b61d-e2330db0b177-kube-api-access-ksrsz\") pod \"cluster-samples-operator-665b6dd947-wwhg8\" (UID: \"4f793610-43f5-4faf-b61d-e2330db0b177\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031485 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvf7\" (UniqueName: \"kubernetes.io/projected/9de4a20c-3f76-4aa8-8347-42f3b3f53145-kube-api-access-xgvf7\") pod 
\"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031533 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-registration-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031561 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-trusted-ca-bundle\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031580 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1fee6a64-486f-4aef-9242-8bf07796d6e3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.031710 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e17925-dd05-43de-8e22-105f0002b651-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032323 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7zqvx\" (UniqueName: \"kubernetes.io/projected/1fee6a64-486f-4aef-9242-8bf07796d6e3-kube-api-access-7zqvx\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032355 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032380 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a6d2fa-46ed-4669-ac19-c335595a24fd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032408 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26986446-4844-49cf-a77d-7d316d2d826b-etcd-client\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032436 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032469 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-csi-data-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032508 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lfjv\" (UniqueName: \"kubernetes.io/projected/783e15c8-9066-455f-878d-86215d82093b-kube-api-access-5lfjv\") pod \"migrator-59844c95c7-hrnj4\" (UID: \"783e15c8-9066-455f-878d-86215d82093b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032535 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48sg\" (UniqueName: \"kubernetes.io/projected/63609b48-d163-4000-a23b-bb70a6719c5c-kube-api-access-l48sg\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032557 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46095393-c72a-4539-b3e4-e2f3f35301b8-proxy-tls\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032581 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46095393-c72a-4539-b3e4-e2f3f35301b8-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032594 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-oauth-serving-cert\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032635 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btlz\" (UniqueName: \"kubernetes.io/projected/21f25477-51d5-480d-a252-f821cc008560-kube-api-access-4btlz\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032655 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-proxy-tls\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032674 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-config\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032693 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dfca0b5-693a-4e28-ae27-d5532038616c-metrics-tls\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032711 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24821bad-09c7-4880-bf0e-a6e829284f2e-cert\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032728 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032756 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b16df0c-b660-4f3c-9d26-cfff395d5c88-secret-volume\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032775 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-socket-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.032930 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/71aa9018-d3be-454d-8d1c-5853f3971151-node-pullsecrets\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033054 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1fee6a64-486f-4aef-9242-8bf07796d6e3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033172 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-etcd-serving-ca\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033276 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32e17925-dd05-43de-8e22-105f0002b651-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033430 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-csi-data-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033499 4940 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef3ada46-965f-42e7-b89b-a67618bff8c6-metrics-tls\") pod \"dns-operator-744455d44c-8ls95\" (UID: \"ef3ada46-965f-42e7-b89b-a67618bff8c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033603 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a6d2fa-46ed-4669-ac19-c335595a24fd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033650 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86468b3-88b2-4c49-b807-c00eceb862e2-config\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033656 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-config\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033692 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-etcd-ca\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033679 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-image-import-ca\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033763 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a60bce1-d0e8-451a-91da-396ec5d5c53b-profile-collector-cert\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033803 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-registration-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033826 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-socket-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033808 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f793610-43f5-4faf-b61d-e2330db0b177-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wwhg8\" (UID: \"4f793610-43f5-4faf-b61d-e2330db0b177\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" Feb 23 08:51:27 crc kubenswrapper[4940]: 
I0223 08:51:27.033872 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-config\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033907 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b546c0fc-b66f-4f2b-ab03-364362906f88-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033943 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnb8z\" (UniqueName: \"kubernetes.io/projected/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-kube-api-access-cnb8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.033979 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689w2\" (UniqueName: \"kubernetes.io/projected/7a60bce1-d0e8-451a-91da-396ec5d5c53b-kube-api-access-689w2\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034013 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71aa9018-d3be-454d-8d1c-5853f3971151-audit-dir\") pod \"apiserver-76f77b778f-znvc9\" 
(UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034040 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-tmpfs\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034075 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5kv7\" (UniqueName: \"kubernetes.io/projected/07ef0edd-666b-4ced-9a27-51433a59c6c0-kube-api-access-b5kv7\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034120 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-metrics-tls\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034138 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-config\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034152 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-oauth-serving-cert\") pod \"console-f9d7485db-2zgdn\" (UID: 
\"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034159 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xknx\" (UniqueName: \"kubernetes.io/projected/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-kube-api-access-5xknx\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034209 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034235 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-mountpoint-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034260 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzg7k\" (UniqueName: \"kubernetes.io/projected/46095393-c72a-4539-b3e4-e2f3f35301b8-kube-api-access-jzg7k\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034284 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2dfca0b5-693a-4e28-ae27-d5532038616c-config-volume\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034309 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-trusted-ca-bundle\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034336 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57505051-bc9e-499e-9013-6365439ebb68-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034362 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-etcd-client\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034390 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngnfq\" (UniqueName: \"kubernetes.io/projected/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-kube-api-access-ngnfq\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034416 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2vbvn\" (UniqueName: \"kubernetes.io/projected/24821bad-09c7-4880-bf0e-a6e829284f2e-kube-api-access-2vbvn\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034443 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57505051-bc9e-499e-9013-6365439ebb68-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034468 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-serving-cert\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034495 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-audit\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034512 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-tmpfs\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034525 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034551 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-trusted-ca\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034581 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/97dc456f-7c0c-49f0-ad5b-e4c791429d57-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2c95j\" (UID: \"97dc456f-7c0c-49f0-ad5b-e4c791429d57\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034600 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71aa9018-d3be-454d-8d1c-5853f3971151-audit-dir\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034605 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-signing-key\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034647 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-metrics-certs\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034695 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-webhook-cert\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034714 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/21f25477-51d5-480d-a252-f821cc008560-mountpoint-dir\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034718 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-oauth-config\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc 
kubenswrapper[4940]: I0223 08:51:27.034771 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2h4d\" (UniqueName: \"kubernetes.io/projected/97dc456f-7c0c-49f0-ad5b-e4c791429d57-kube-api-access-k2h4d\") pod \"multus-admission-controller-857f4d67dd-2c95j\" (UID: \"97dc456f-7c0c-49f0-ad5b-e4c791429d57\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034795 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-etcd-service-ca\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034818 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46095393-c72a-4539-b3e4-e2f3f35301b8-images\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.034844 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.035224 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.035361 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-image-import-ca\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.035585 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.535575155 +0000 UTC m=+218.918781312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.036186 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-etcd-service-ca\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.036326 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/71aa9018-d3be-454d-8d1c-5853f3971151-audit\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.036357 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-serving-cert\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.036509 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46095393-c72a-4539-b3e4-e2f3f35301b8-images\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.036654 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26986446-4844-49cf-a77d-7d316d2d826b-serving-cert\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.037442 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26986446-4844-49cf-a77d-7d316d2d826b-config\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.037572 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-metrics-tls\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.038409 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-trusted-ca-bundle\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.038971 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-trusted-ca\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.039094 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-etcd-client\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.039743 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46095393-c72a-4539-b3e4-e2f3f35301b8-proxy-tls\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.039768 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/4f793610-43f5-4faf-b61d-e2330db0b177-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wwhg8\" (UID: \"4f793610-43f5-4faf-b61d-e2330db0b177\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.040027 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/26986446-4844-49cf-a77d-7d316d2d826b-etcd-client\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.040374 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/97dc456f-7c0c-49f0-ad5b-e4c791429d57-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2c95j\" (UID: \"97dc456f-7c0c-49f0-ad5b-e4c791429d57\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.040494 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fee6a64-486f-4aef-9242-8bf07796d6e3-serving-cert\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.041189 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32e17925-dd05-43de-8e22-105f0002b651-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 
08:51:27.041220 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-oauth-config\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.041936 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-serving-cert\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.042384 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/71aa9018-d3be-454d-8d1c-5853f3971151-encryption-config\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.045911 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.065694 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.085951 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.106023 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.126218 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.135941 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.136124 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.636084406 +0000 UTC m=+219.019290593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.137136 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.137551 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.637537012 +0000 UTC m=+219.020743169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.139133 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b546c0fc-b66f-4f2b-ab03-364362906f88-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.139603 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b16df0c-b660-4f3c-9d26-cfff395d5c88-secret-volume\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.142604 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7a60bce1-d0e8-451a-91da-396ec5d5c53b-profile-collector-cert\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.145595 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.154427 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b16df0c-b660-4f3c-9d26-cfff395d5c88-config-volume\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.165470 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.180349 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57505051-bc9e-499e-9013-6365439ebb68-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.185415 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.205760 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.215141 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57505051-bc9e-499e-9013-6365439ebb68-config\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.225413 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.238381 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.238756 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.738728405 +0000 UTC m=+219.121934602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.239822 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.240428 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.740401248 +0000 UTC m=+219.123607455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.245831 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.251206 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.265396 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.269308 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-signing-key\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.285962 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.306742 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.326674 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.335564 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-config\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.341826 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.341972 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.841939032 +0000 UTC m=+219.225145229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.342666 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.343200 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.843176932 +0000 UTC m=+219.226383129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.345245 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.366386 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.386009 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.406539 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.413311 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-signing-cabundle\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.426338 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.438353 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-apiservice-cert\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.439978 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-webhook-cert\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.444266 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.444485 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.944450958 +0000 UTC m=+219.327657125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.444980 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.445359 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:27.945339905 +0000 UTC m=+219.328546062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.445601 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.457908 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7a60bce1-d0e8-451a-91da-396ec5d5c53b-srv-cert\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.466287 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.486000 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.504307 4940 request.go:700] Waited for 1.00152247s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.506018 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.516841 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-687p7\" (UID: \"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.526318 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.533263 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f476562-bc2b-4c2d-8283-fae9b3f8ac4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-glhjd\" (UID: \"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.546452 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.546584 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.046559429 +0000 UTC m=+219.429765586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.546823 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.548067 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.548419 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.048411328 +0000 UTC m=+219.431617485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.565080 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.577107 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5a6d2fa-46ed-4669-ac19-c335595a24fd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.585984 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.593527 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5a6d2fa-46ed-4669-ac19-c335595a24fd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.605537 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.626326 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.646436 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.649317 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.649596 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.149543199 +0000 UTC m=+219.532749366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.650514 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.650894 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.150884322 +0000 UTC m=+219.534090489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.667176 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.678170 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86468b3-88b2-4c49-b807-c00eceb862e2-serving-cert\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.686016 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.695569 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86468b3-88b2-4c49-b807-c00eceb862e2-config\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.705278 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.725803 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.736370 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b546c0fc-b66f-4f2b-ab03-364362906f88-srv-cert\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.746535 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.751316 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.751711 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.25159343 +0000 UTC m=+219.634799627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.752104 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.752771 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.252753986 +0000 UTC m=+219.635960173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.757140 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.765496 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.776725 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.786411 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.806601 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.825927 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.845595 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.854807 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.855169 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.355123697 +0000 UTC m=+219.738329894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.855544 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw"
Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.856027 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.356003275 +0000 UTC m=+219.739209472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.865832 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.886121 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.906182 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.926919 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.946392 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.956926 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.957164 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.457130655 +0000 UTC m=+219.840336852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.957605 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:27 crc kubenswrapper[4940]: E0223 08:51:27.958364 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.458336554 +0000 UTC m=+219.841542741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.958740 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-default-certificate\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.965484 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.978020 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-stats-auth\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:27 crc kubenswrapper[4940]: I0223 08:51:27.986076 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.000794 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/feea3a62-1f72-4b46-9655-521e8ff5c323-metrics-certs\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 
08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.006053 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.015332 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feea3a62-1f72-4b46-9655-521e8ff5c323-service-ca-bundle\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.026860 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.029720 4940 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.030276 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-certs podName:63609b48-d163-4000-a23b-bb70a6719c5c nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.530243147 +0000 UTC m=+219.913449334 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-certs") pod "machine-config-server-6f7z7" (UID: "63609b48-d163-4000-a23b-bb70a6719c5c") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.030873 4940 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.030973 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-node-bootstrap-token podName:63609b48-d163-4000-a23b-bb70a6719c5c nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.530944569 +0000 UTC m=+219.914150766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-node-bootstrap-token") pod "machine-config-server-6f7z7" (UID: "63609b48-d163-4000-a23b-bb70a6719c5c") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.034748 4940 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.034824 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24821bad-09c7-4880-bf0e-a6e829284f2e-cert podName:24821bad-09c7-4880-bf0e-a6e829284f2e nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.534803362 +0000 UTC m=+219.918009549 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24821bad-09c7-4880-bf0e-a6e829284f2e-cert") pod "ingress-canary-zgt46" (UID: "24821bad-09c7-4880-bf0e-a6e829284f2e") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.034866 4940 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.034881 4940 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.034906 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dfca0b5-693a-4e28-ae27-d5532038616c-metrics-tls podName:2dfca0b5-693a-4e28-ae27-d5532038616c nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.534894855 +0000 UTC m=+219.918101042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2dfca0b5-693a-4e28-ae27-d5532038616c-metrics-tls") pod "dns-default-sppfz" (UID: "2dfca0b5-693a-4e28-ae27-d5532038616c") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.034916 4940 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.035066 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics podName:9de4a20c-3f76-4aa8-8347-42f3b3f53145 nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.53503367 +0000 UTC m=+219.918239867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics") pod "marketplace-operator-79b997595-j9x9v" (UID: "9de4a20c-3f76-4aa8-8347-42f3b3f53145") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.035109 4940 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.035133 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca podName:9de4a20c-3f76-4aa8-8347-42f3b3f53145 nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.535105002 +0000 UTC m=+219.918311199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca") pod "marketplace-operator-79b997595-j9x9v" (UID: "9de4a20c-3f76-4aa8-8347-42f3b3f53145") : failed to sync configmap cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.035171 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-proxy-tls podName:bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.535151043 +0000 UTC m=+219.918357290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-proxy-tls") pod "machine-config-controller-84d6567774-wzbgr" (UID: "bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.036209 4940 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.036486 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2dfca0b5-693a-4e28-ae27-d5532038616c-config-volume podName:2dfca0b5-693a-4e28-ae27-d5532038616c nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.536456974 +0000 UTC m=+219.919663181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2dfca0b5-693a-4e28-ae27-d5532038616c-config-volume") pod "dns-default-sppfz" (UID: "2dfca0b5-693a-4e28-ae27-d5532038616c") : failed to sync configmap cache: timed out waiting for the condition Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.046011 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.058975 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.059181 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.559157085 +0000 UTC m=+219.942363242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.059527 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.059858 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.559848007 +0000 UTC m=+219.943054154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.073530 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.086150 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.105680 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.126010 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.146209 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.161581 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.161785 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.661753293 +0000 UTC m=+220.044959460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.162599 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.163035 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.663019423 +0000 UTC m=+220.046225660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.165816 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.185955 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.205662 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.226072 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.246135 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.264170 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.264370 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.76433468 +0000 UTC m=+220.147540887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.265676 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.266044 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.266156 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.766134017 +0000 UTC m=+220.149340204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.285944 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.305747 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.325814 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.346917 4940 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.365535 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.366934 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.367177 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.867142534 +0000 UTC m=+220.250348731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.367252 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.367709 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.867688612 +0000 UTC m=+220.250894779 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.385511 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.405175 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.425474 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.468810 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.469033 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.968992879 +0000 UTC m=+220.352199076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.471765 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.472213 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:28.97219194 +0000 UTC m=+220.355398137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.476495 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7t7z\" (UniqueName: \"kubernetes.io/projected/550e2596-a506-465a-91ab-44280e7727d3-kube-api-access-n7t7z\") pod \"console-operator-58897d9998-xlz2g\" (UID: \"550e2596-a506-465a-91ab-44280e7727d3\") " pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.484375 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v6jx\" (UniqueName: \"kubernetes.io/projected/d1df31f2-53d1-42d0-9e24-a5ce6ad604d6-kube-api-access-8v6jx\") pod \"authentication-operator-69f744f599-7xd5s\" (UID: \"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.514252 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhxrv\" (UniqueName: \"kubernetes.io/projected/539d5f30-be10-4fc5-bf8f-11442d337bac-kube-api-access-lhxrv\") pod \"openshift-apiserver-operator-796bbdcf4f-8d22c\" (UID: \"539d5f30-be10-4fc5-bf8f-11442d337bac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.524028 4940 request.go:700] Waited for 1.926870522s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.532416 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9j95\" (UniqueName: \"kubernetes.io/projected/973d0526-cb43-44e0-adc5-9a5438c906f9-kube-api-access-r9j95\") pod \"machine-approver-56656f9798-vc9hk\" (UID: \"973d0526-cb43-44e0-adc5-9a5438c906f9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.541486 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vswr\" (UniqueName: \"kubernetes.io/projected/af5df6df-7f6c-40a3-b1da-44af29cdee8b-kube-api-access-2vswr\") pod \"route-controller-manager-6576b87f9c-2mrdb\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.574257 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.574432 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.074412576 +0000 UTC m=+220.457618743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.574767 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.574968 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-proxy-tls\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.575029 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dfca0b5-693a-4e28-ae27-d5532038616c-metrics-tls\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.575104 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24821bad-09c7-4880-bf0e-a6e829284f2e-cert\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " 
pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.575142 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.575301 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dfca0b5-693a-4e28-ae27-d5532038616c-config-volume\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.575413 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.576657 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-certs\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.576704 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 08:51:29.076687898 +0000 UTC m=+220.459894065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.576883 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dfca0b5-693a-4e28-ae27-d5532038616c-config-volume\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.577077 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-node-bootstrap-token\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.578837 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.579258 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/2dfca0b5-693a-4e28-ae27-d5532038616c-metrics-tls\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.580423 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24821bad-09c7-4880-bf0e-a6e829284f2e-cert\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.580843 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-node-bootstrap-token\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.581131 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/63609b48-d163-4000-a23b-bb70a6719c5c-certs\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.581466 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.584793 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5btc8\" (UniqueName: 
\"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-kube-api-access-5btc8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.585099 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-proxy-tls\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.589396 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.601328 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-bound-sa-token\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.623470 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94mcm\" (UniqueName: \"kubernetes.io/projected/916e1f6f-2bfc-41e7-86c2-6c379e3638c1-kube-api-access-94mcm\") pod \"downloads-7954f5f757-6qcpm\" (UID: \"916e1f6f-2bfc-41e7-86c2-6c379e3638c1\") " pod="openshift-console/downloads-7954f5f757-6qcpm" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.641203 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m4cz\" (UniqueName: \"kubernetes.io/projected/e13ea819-2f94-423e-ab3f-c7b6d03ad686-kube-api-access-4m4cz\") pod \"controller-manager-879f6c89f-224dd\" (UID: 
\"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.641762 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.655358 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6qcpm" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.667731 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvbx\" (UniqueName: \"kubernetes.io/projected/c733e45d-a072-4619-b2f8-aea6d77b112f-kube-api-access-ltvbx\") pod \"oauth-openshift-558db77b4-rrhk2\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.678908 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.679203 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.179181583 +0000 UTC m=+220.562387740 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.679442 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.679833 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.179820843 +0000 UTC m=+220.563027010 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: W0223 08:51:28.682166 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod973d0526_cb43_44e0_adc5_9a5438c906f9.slice/crio-8e81b8bdc4cd1455057a7998b19bea11b2046810cda17939c16395dd0a49d8b4 WatchSource:0}: Error finding container 8e81b8bdc4cd1455057a7998b19bea11b2046810cda17939c16395dd0a49d8b4: Status 404 returned error can't find the container with id 8e81b8bdc4cd1455057a7998b19bea11b2046810cda17939c16395dd0a49d8b4 Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.688843 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snx9t\" (UniqueName: \"kubernetes.io/projected/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-kube-api-access-snx9t\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.706854 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9frb\" (UniqueName: \"kubernetes.io/projected/48ab701f-fc67-4d29-9cad-337e223f6f87-kube-api-access-d9frb\") pod \"apiserver-7bbb656c7d-slpn2\" (UID: \"48ab701f-fc67-4d29-9cad-337e223f6f87\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.727223 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lb8pn\" (UID: \"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.740683 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgflg\" (UniqueName: \"kubernetes.io/projected/4de72bcc-6d41-47cc-b9f7-f4cca10b977f-kube-api-access-rgflg\") pod \"machine-api-operator-5694c8668f-95wjd\" (UID: \"4de72bcc-6d41-47cc-b9f7-f4cca10b977f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.761092 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.761358 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsnzz\" (UniqueName: \"kubernetes.io/projected/ef3ada46-965f-42e7-b89b-a67618bff8c6-kube-api-access-zsnzz\") pod \"dns-operator-744455d44c-8ls95\" (UID: \"ef3ada46-965f-42e7-b89b-a67618bff8c6\") " pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.780205 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.780878 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.280862371 +0000 UTC m=+220.664068528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.784006 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44rxh\" (UniqueName: \"kubernetes.io/projected/2f476562-bc2b-4c2d-8283-fae9b3f8ac4e-kube-api-access-44rxh\") pod \"package-server-manager-789f6589d5-glhjd\" (UID: \"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.794967 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.802934 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmntp\" (UniqueName: \"kubernetes.io/projected/feea3a62-1f72-4b46-9655-521e8ff5c323-kube-api-access-bmntp\") pod \"router-default-5444994796-tqnlh\" (UID: \"feea3a62-1f72-4b46-9655-521e8ff5c323\") " pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.808899 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.819141 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.821115 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.823599 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7xd5s"] Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.825988 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ssbn\" (UniqueName: \"kubernetes.io/projected/26986446-4844-49cf-a77d-7d316d2d826b-kube-api-access-9ssbn\") pod \"etcd-operator-b45778765-5tf64\" (UID: \"26986446-4844-49cf-a77d-7d316d2d826b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.842646 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdlj2\" (UniqueName: \"kubernetes.io/projected/d86468b3-88b2-4c49-b807-c00eceb862e2-kube-api-access-zdlj2\") pod \"service-ca-operator-777779d784-nmq4g\" (UID: \"d86468b3-88b2-4c49-b807-c00eceb862e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.873331 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfm8x\" (UniqueName: \"kubernetes.io/projected/2dfca0b5-693a-4e28-ae27-d5532038616c-kube-api-access-zfm8x\") pod \"dns-default-sppfz\" (UID: \"2dfca0b5-693a-4e28-ae27-d5532038616c\") " pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.873957 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.881402 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.881816 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.381804266 +0000 UTC m=+220.765010423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.890964 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wtnk\" (UniqueName: \"kubernetes.io/projected/71aa9018-d3be-454d-8d1c-5853f3971151-kube-api-access-4wtnk\") pod \"apiserver-76f77b778f-znvc9\" (UID: \"71aa9018-d3be-454d-8d1c-5853f3971151\") " pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.906493 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6qcpm"] Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.907708 4940 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.911882 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5skj\" (UniqueName: \"kubernetes.io/projected/d2d9e940-6dc2-4325-b5eb-410c1f038ae5-kube-api-access-f5skj\") pod \"service-ca-9c57cc56f-f9p7v\" (UID: \"d2d9e940-6dc2-4325-b5eb-410c1f038ae5\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:28 crc kubenswrapper[4940]: W0223 08:51:28.916186 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeea3a62_1f72_4b46_9655_521e8ff5c323.slice/crio-df1f4d427e19165de8e27f0ba4adb4edf4387b669a97281c37ee240fe9f42376 WatchSource:0}: Error finding container df1f4d427e19165de8e27f0ba4adb4edf4387b669a97281c37ee240fe9f42376: Status 404 returned error can't find the container with id df1f4d427e19165de8e27f0ba4adb4edf4387b669a97281c37ee240fe9f42376 Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.918286 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.924063 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqvx\" (UniqueName: \"kubernetes.io/projected/1fee6a64-486f-4aef-9242-8bf07796d6e3-kube-api-access-7zqvx\") pod \"openshift-config-operator-7777fb866f-n95bv\" (UID: \"1fee6a64-486f-4aef-9242-8bf07796d6e3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.937441 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.937671 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.938013 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.943377 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v79wc\" (UniqueName: \"kubernetes.io/projected/b546c0fc-b66f-4f2b-ab03-364362906f88-kube-api-access-v79wc\") pod \"olm-operator-6b444d44fb-lz7cz\" (UID: \"b546c0fc-b66f-4f2b-ab03-364362906f88\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.961992 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.976222 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a449118-5ee9-42f2-bdc2-a23f1c6febf6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-whcws\" (UID: \"6a449118-5ee9-42f2-bdc2-a23f1c6febf6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.980492 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.982915 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:28 crc kubenswrapper[4940]: E0223 08:51:28.983364 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.483350221 +0000 UTC m=+220.866556378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.986872 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.987743 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmw9\" (UniqueName: \"kubernetes.io/projected/bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf-kube-api-access-hqmw9\") pod \"machine-config-controller-84d6567774-wzbgr\" (UID: \"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:28 crc kubenswrapper[4940]: I0223 08:51:28.998399 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xlz2g"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.003060 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44mh\" (UniqueName: \"kubernetes.io/projected/32e17925-dd05-43de-8e22-105f0002b651-kube-api-access-w44mh\") pod \"openshift-controller-manager-operator-756b6f6bc6-klkgk\" (UID: \"32e17925-dd05-43de-8e22-105f0002b651\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.021604 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tqnlh" event={"ID":"feea3a62-1f72-4b46-9655-521e8ff5c323","Type":"ContainerStarted","Data":"df1f4d427e19165de8e27f0ba4adb4edf4387b669a97281c37ee240fe9f42376"} Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.028624 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6qcpm" event={"ID":"916e1f6f-2bfc-41e7-86c2-6c379e3638c1","Type":"ContainerStarted","Data":"a7311d940f6a9bdbdb8d6a9a1142e45a59c2380fb24cc54f175f4257795dad05"} Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.028656 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.030763 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" event={"ID":"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6","Type":"ContainerStarted","Data":"985f143b27da170bda4362fe9585d82ca07e0d54583cb29121473239a6f5bcc1"} Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.030811 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" event={"ID":"d1df31f2-53d1-42d0-9e24-a5ce6ad604d6","Type":"ContainerStarted","Data":"12bf95415c77e075964cc23c662bb2ee1da88be23eb0ee072429346b6584b5ba"} Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.039074 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" event={"ID":"973d0526-cb43-44e0-adc5-9a5438c906f9","Type":"ContainerStarted","Data":"e78afab3db8b7e16ffe9d6d7024359effd91477eab2b3afaf9657dfd2a97db5c"} Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.039129 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" event={"ID":"973d0526-cb43-44e0-adc5-9a5438c906f9","Type":"ContainerStarted","Data":"8e81b8bdc4cd1455057a7998b19bea11b2046810cda17939c16395dd0a49d8b4"} Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.040854 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lfjv\" (UniqueName: \"kubernetes.io/projected/783e15c8-9066-455f-878d-86215d82093b-kube-api-access-5lfjv\") pod \"migrator-59844c95c7-hrnj4\" (UID: \"783e15c8-9066-455f-878d-86215d82093b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.044306 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.061189 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mctks\" (UniqueName: \"kubernetes.io/projected/f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb-kube-api-access-mctks\") pod \"control-plane-machine-set-operator-78cbb6b69f-687p7\" (UID: \"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.085171 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.085408 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48sg\" (UniqueName: \"kubernetes.io/projected/63609b48-d163-4000-a23b-bb70a6719c5c-kube-api-access-l48sg\") pod \"machine-config-server-6f7z7\" (UID: \"63609b48-d163-4000-a23b-bb70a6719c5c\") " pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.087146 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 08:51:29.587105205 +0000 UTC m=+220.970311362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.100859 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.101278 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.105843 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksrsz\" (UniqueName: \"kubernetes.io/projected/4f793610-43f5-4faf-b61d-e2330db0b177-kube-api-access-ksrsz\") pod \"cluster-samples-operator-665b6dd947-wwhg8\" (UID: \"4f793610-43f5-4faf-b61d-e2330db0b177\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.114576 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.119998 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b5rx\" (UniqueName: \"kubernetes.io/projected/1b16df0c-b660-4f3c-9d26-cfff395d5c88-kube-api-access-5b5rx\") pod \"collect-profiles-29530605-25rxm\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.139515 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.143391 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.144800 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.145446 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvf7\" (UniqueName: \"kubernetes.io/projected/9de4a20c-3f76-4aa8-8347-42f3b3f53145-kube-api-access-xgvf7\") pod \"marketplace-operator-79b997595-j9x9v\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.161957 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.167790 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a6d2fa-46ed-4669-ac19-c335595a24fd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8smd9\" (UID: \"c5a6d2fa-46ed-4669-ac19-c335595a24fd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.169025 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.181233 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.181691 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btlz\" (UniqueName: \"kubernetes.io/projected/21f25477-51d5-480d-a252-f821cc008560-kube-api-access-4btlz\") pod \"csi-hostpathplugin-m4hz9\" (UID: \"21f25477-51d5-480d-a252-f821cc008560\") " pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.187200 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:29 crc kubenswrapper[4940]: W0223 08:51:29.188189 4940 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod539d5f30_be10_4fc5_bf8f_11442d337bac.slice/crio-527eb0266e9169a7697a91a51de275dd818ee8964764a19bad5aeebdbf5f1189 WatchSource:0}: Error finding container 527eb0266e9169a7697a91a51de275dd818ee8964764a19bad5aeebdbf5f1189: Status 404 returned error can't find the container with id 527eb0266e9169a7697a91a51de275dd818ee8964764a19bad5aeebdbf5f1189 Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.188527 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.688512215 +0000 UTC m=+221.071718372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.188642 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.200851 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6f7z7" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.201062 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnb8z\" (UniqueName: \"kubernetes.io/projected/0d9b99ac-2db8-435f-ad9f-4d7335a40e19-kube-api-access-cnb8z\") pod \"kube-storage-version-migrator-operator-b67b599dd-z7w6c\" (UID: \"0d9b99ac-2db8-435f-ad9f-4d7335a40e19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.226831 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.229380 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xknx\" (UniqueName: \"kubernetes.io/projected/7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd-kube-api-access-5xknx\") pod \"packageserver-d55dfcdfc-wtxsb\" (UID: \"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.246919 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5kv7\" (UniqueName: \"kubernetes.io/projected/07ef0edd-666b-4ced-9a27-51433a59c6c0-kube-api-access-b5kv7\") pod \"console-f9d7485db-2zgdn\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.267523 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.281086 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689w2\" (UniqueName: \"kubernetes.io/projected/7a60bce1-d0e8-451a-91da-396ec5d5c53b-kube-api-access-689w2\") pod \"catalog-operator-68c6474976-9xdrp\" (UID: \"7a60bce1-d0e8-451a-91da-396ec5d5c53b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.283385 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngnfq\" (UniqueName: \"kubernetes.io/projected/726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd-kube-api-access-ngnfq\") pod \"ingress-operator-5b745b69d9-mg5lr\" (UID: \"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.288443 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.288920 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.788902493 +0000 UTC m=+221.172108650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.304603 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rrhk2"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.308095 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzg7k\" (UniqueName: \"kubernetes.io/projected/46095393-c72a-4539-b3e4-e2f3f35301b8-kube-api-access-jzg7k\") pod \"machine-config-operator-74547568cd-6g7xj\" (UID: \"46095393-c72a-4539-b3e4-e2f3f35301b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.317436 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vbvn\" (UniqueName: \"kubernetes.io/projected/24821bad-09c7-4880-bf0e-a6e829284f2e-kube-api-access-2vbvn\") pod \"ingress-canary-zgt46\" (UID: \"24821bad-09c7-4880-bf0e-a6e829284f2e\") " pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.322539 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.338033 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.344795 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57505051-bc9e-499e-9013-6365439ebb68-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pw8l4\" (UID: \"57505051-bc9e-499e-9013-6365439ebb68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.344782 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.359835 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.360133 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2h4d\" (UniqueName: \"kubernetes.io/projected/97dc456f-7c0c-49f0-ad5b-e4c791429d57-kube-api-access-k2h4d\") pod \"multus-admission-controller-857f4d67dd-2c95j\" (UID: \"97dc456f-7c0c-49f0-ad5b-e4c791429d57\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.369657 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.377834 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.389824 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.389922 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.88989797 +0000 UTC m=+221.273104127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.390033 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.390324 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.890313673 +0000 UTC m=+221.273519820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.395286 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sppfz"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.401218 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.404360 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-224dd"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.406109 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.407500 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.422986 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.430544 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.453872 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.493206 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.493486 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:29.993470488 +0000 UTC m=+221.376676645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.535809 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zgt46" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.538835 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.556735 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-95wjd"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.594539 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.594916 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.094904488 +0000 UTC m=+221.478110645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.628120 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5tf64"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.653807 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.697973 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.698544 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.198521048 +0000 UTC m=+221.581727225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.701340 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-znvc9"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.744583 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-n95bv"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.758796 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8ls95"] Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.800078 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.800450 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.300433334 +0000 UTC m=+221.683639481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: W0223 08:51:29.834384 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26986446_4844_49cf_a77d_7d316d2d826b.slice/crio-3fc544113c6c2dcb43b799531e18695e37e247aa347332857b038c1cff24b0d7 WatchSource:0}: Error finding container 3fc544113c6c2dcb43b799531e18695e37e247aa347332857b038c1cff24b0d7: Status 404 returned error can't find the container with id 3fc544113c6c2dcb43b799531e18695e37e247aa347332857b038c1cff24b0d7 Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.901747 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.902109 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.402089642 +0000 UTC m=+221.785295799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: I0223 08:51:29.902506 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:29 crc kubenswrapper[4940]: E0223 08:51:29.903004 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.4029945 +0000 UTC m=+221.786200657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:29 crc kubenswrapper[4940]: W0223 08:51:29.913301 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef3ada46_965f_42e7_b89b_a67618bff8c6.slice/crio-8e327f75b3e92b9d452be8340d2bb3043341f5d0c83a8d7fe47b267fc07c9493 WatchSource:0}: Error finding container 8e327f75b3e92b9d452be8340d2bb3043341f5d0c83a8d7fe47b267fc07c9493: Status 404 returned error can't find the container with id 8e327f75b3e92b9d452be8340d2bb3043341f5d0c83a8d7fe47b267fc07c9493 Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.003289 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.003651 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.503602455 +0000 UTC m=+221.886808612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.020244 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.032215 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f9p7v"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.039133 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.055249 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" event={"ID":"c733e45d-a072-4619-b2f8-aea6d77b112f","Type":"ContainerStarted","Data":"3f739c33d43df28a656d8681974c2a2cdce1c262411a7308436aa14935e8d280"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.057498 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7xd5s" podStartSLOduration=153.057481506 podStartE2EDuration="2m33.057481506s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:30.057265349 +0000 UTC m=+221.440471506" watchObservedRunningTime="2026-02-23 08:51:30.057481506 +0000 UTC m=+221.440687663" Feb 23 08:51:30 crc 
kubenswrapper[4940]: I0223 08:51:30.057925 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tqnlh" event={"ID":"feea3a62-1f72-4b46-9655-521e8ff5c323","Type":"ContainerStarted","Data":"1a6c407e29e3d6396bb8172c89a7ac6b0752652bce48df309b33961219d457b0"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.059495 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" event={"ID":"e13ea819-2f94-423e-ab3f-c7b6d03ad686","Type":"ContainerStarted","Data":"da7d9dfab5da8ecc4117f16b91c332b55f678d20b6e3c4694bdf5179dd80c3cf"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.063043 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" event={"ID":"48ab701f-fc67-4d29-9cad-337e223f6f87","Type":"ContainerStarted","Data":"b28964d7bfab965e27d0e577e7667d2ea99f36b3c37db167edce2359c3429fba"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.068000 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" event={"ID":"973d0526-cb43-44e0-adc5-9a5438c906f9","Type":"ContainerStarted","Data":"55d4bfaa95688caddb0c4deabc1577bd6d3d0ac988badeb05c8222bbdc55073a"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.074023 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6qcpm" event={"ID":"916e1f6f-2bfc-41e7-86c2-6c379e3638c1","Type":"ContainerStarted","Data":"85e0d9f6119b8f84197f585c426f0fcbf4104816ce615be6077bdd0399c2ddab"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.074263 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6qcpm" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.075487 4940 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qcpm container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.075521 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qcpm" podUID="916e1f6f-2bfc-41e7-86c2-6c379e3638c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.076332 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" event={"ID":"1fee6a64-486f-4aef-9242-8bf07796d6e3","Type":"ContainerStarted","Data":"63969fdd0ffe985e5c21aa826dbb3006cff5f4f5abfeeb3aa7c416d1d0095008"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.079838 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" event={"ID":"550e2596-a506-465a-91ab-44280e7727d3","Type":"ContainerStarted","Data":"65587e726255579ca894bfdef452050699c11a96f51101f76f963667fefd5bd3"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.079882 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" event={"ID":"550e2596-a506-465a-91ab-44280e7727d3","Type":"ContainerStarted","Data":"c73107f7f8fadb2c70c09ea11fd45a0fbf9bcdcf82181c9590572b8b0fa830a5"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.080365 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.081490 4940 patch_prober.go:28] interesting pod/console-operator-58897d9998-xlz2g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.081577 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" podUID="550e2596-a506-465a-91ab-44280e7727d3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.088912 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" event={"ID":"71aa9018-d3be-454d-8d1c-5853f3971151","Type":"ContainerStarted","Data":"bfbf3553331bec99be21db2b1d1ceab8f034d183aa3cba4448912462e3f605dc"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.096472 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" event={"ID":"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd","Type":"ContainerStarted","Data":"d77105b5314c7b9d422cc4408a5f66a0bc54f38916f3d994f0a24762485761b8"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.099676 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" event={"ID":"4de72bcc-6d41-47cc-b9f7-f4cca10b977f","Type":"ContainerStarted","Data":"45392af82d619fd1644a094ecc5c22adb39db9112f6edadfc0efa64f88b5414e"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.101654 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" event={"ID":"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e","Type":"ContainerStarted","Data":"84e2d3b7946233a7b65f3383f2627a7571448547582d47d70726d14a7e5c60c3"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.103561 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-sppfz" event={"ID":"2dfca0b5-693a-4e28-ae27-d5532038616c","Type":"ContainerStarted","Data":"2e85dadbeed3bb489e690ac7d5a9f589e34d70cdefef7d3f88e44c79c1532d26"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.104516 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6f7z7" event={"ID":"63609b48-d163-4000-a23b-bb70a6719c5c","Type":"ContainerStarted","Data":"ea519262fdb075f69b0610511b82f21dd65927df31cdb0757207bf8424239890"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.104747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.105066 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.605051126 +0000 UTC m=+221.988257283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.108690 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" event={"ID":"ef3ada46-965f-42e7-b89b-a67618bff8c6","Type":"ContainerStarted","Data":"8e327f75b3e92b9d452be8340d2bb3043341f5d0c83a8d7fe47b267fc07c9493"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.115400 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" event={"ID":"26986446-4844-49cf-a77d-7d316d2d826b","Type":"ContainerStarted","Data":"3fc544113c6c2dcb43b799531e18695e37e247aa347332857b038c1cff24b0d7"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.120702 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" event={"ID":"539d5f30-be10-4fc5-bf8f-11442d337bac","Type":"ContainerStarted","Data":"9cc49af27f40e0320bc0c09a4bd6cb90b82c005f7a75d44987e720e2301dbc4b"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.120742 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" event={"ID":"539d5f30-be10-4fc5-bf8f-11442d337bac","Type":"ContainerStarted","Data":"527eb0266e9169a7697a91a51de275dd818ee8964764a19bad5aeebdbf5f1189"} Feb 23 08:51:30 crc kubenswrapper[4940]: W0223 08:51:30.142183 4940 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a449118_5ee9_42f2_bdc2_a23f1c6febf6.slice/crio-8f8db6d49ebc7eb232bb90f43b4036d74e85f4568cdd4c9469d5fd66dce0aa2a WatchSource:0}: Error finding container 8f8db6d49ebc7eb232bb90f43b4036d74e85f4568cdd4c9469d5fd66dce0aa2a: Status 404 returned error can't find the container with id 8f8db6d49ebc7eb232bb90f43b4036d74e85f4568cdd4c9469d5fd66dce0aa2a Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.147793 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" event={"ID":"af5df6df-7f6c-40a3-b1da-44af29cdee8b","Type":"ContainerStarted","Data":"ac0a19ea351b92f589453a34077d4bddefb33a6b68a444fdf2e10a434d067cc0"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.147830 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" event={"ID":"af5df6df-7f6c-40a3-b1da-44af29cdee8b","Type":"ContainerStarted","Data":"34f0b4758aab68ebfe65079b4ae647649d26117982b3035f6d70686248553646"} Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.148340 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.149526 4940 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2mrdb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.149563 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" podUID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 23 08:51:30 crc kubenswrapper[4940]: W0223 08:51:30.175961 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2d9e940_6dc2_4325_b5eb_410c1f038ae5.slice/crio-46642c8ba443e6ec129cbce01470f925bca00b9755e47d96ca8be21fc20c5d36 WatchSource:0}: Error finding container 46642c8ba443e6ec129cbce01470f925bca00b9755e47d96ca8be21fc20c5d36: Status 404 returned error can't find the container with id 46642c8ba443e6ec129cbce01470f925bca00b9755e47d96ca8be21fc20c5d36 Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.206282 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.207387 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.707374216 +0000 UTC m=+222.090580373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.307764 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.308144 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.808128445 +0000 UTC m=+222.191334602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.410379 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.410799 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:30.910780744 +0000 UTC m=+222.293986901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.428485 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9x9v"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.443694 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz"] Feb 23 08:51:30 crc kubenswrapper[4940]: W0223 08:51:30.459195 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9de4a20c_3f76_4aa8_8347_42f3b3f53145.slice/crio-ee288c906ac0e67b3520b29e0f987e1ea4c2abfb1f71555f74c6a3a74e194ced WatchSource:0}: Error finding container ee288c906ac0e67b3520b29e0f987e1ea4c2abfb1f71555f74c6a3a74e194ced: Status 404 returned error can't find the container with id ee288c906ac0e67b3520b29e0f987e1ea4c2abfb1f71555f74c6a3a74e194ced Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.513220 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.514871 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.014759596 +0000 UTC m=+222.397965753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.614923 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.615170 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.115135933 +0000 UTC m=+222.498342090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.622137 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.622869 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.122851377 +0000 UTC m=+222.506057534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.727768 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4hz9"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.728123 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.728183 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.228169352 +0000 UTC m=+222.611375509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.728798 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.729179 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.229166953 +0000 UTC m=+222.612373110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.736328 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2zgdn"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.752890 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.774946 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.777266 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7"] Feb 23 08:51:30 crc kubenswrapper[4940]: W0223 08:51:30.783857 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21f25477_51d5_480d_a252_f821cc008560.slice/crio-42729a712477b969fda6091ac4e0670e073780cdcc4c0bbb48295772b2cfb85c WatchSource:0}: Error finding container 42729a712477b969fda6091ac4e0670e073780cdcc4c0bbb48295772b2cfb85c: Status 404 returned error can't find the container with id 42729a712477b969fda6091ac4e0670e073780cdcc4c0bbb48295772b2cfb85c Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.829735 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.830124 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.330104718 +0000 UTC m=+222.713310875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.855415 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.874852 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.879037 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:30 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:30 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:30 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 
08:51:30.879073 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.918118 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.933443 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:30 crc kubenswrapper[4940]: E0223 08:51:30.934180 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.434163672 +0000 UTC m=+222.817369829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.961338 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.977534 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb"] Feb 23 08:51:30 crc kubenswrapper[4940]: I0223 08:51:30.985366 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.023319 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.029534 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.030121 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" podStartSLOduration=154.030111349 podStartE2EDuration="2m34.030111349s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.017218119 +0000 
UTC m=+222.400424276" watchObservedRunningTime="2026-02-23 08:51:31.030111349 +0000 UTC m=+222.413317506" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.034489 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.034952 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.534935442 +0000 UTC m=+222.918141599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.044709 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zgt46"] Feb 23 08:51:31 crc kubenswrapper[4940]: W0223 08:51:31.049878 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46095393_c72a_4539_b3e4_e2f3f35301b8.slice/crio-b7aca903c862bf403b80e68c2afc77a5ddd7aacd23da48d452601d1a51ccd30f WatchSource:0}: Error finding container b7aca903c862bf403b80e68c2afc77a5ddd7aacd23da48d452601d1a51ccd30f: Status 404 returned error can't find the container with id 
b7aca903c862bf403b80e68c2afc77a5ddd7aacd23da48d452601d1a51ccd30f Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.055342 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.057321 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.064302 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vc9hk" podStartSLOduration=154.064287104 podStartE2EDuration="2m34.064287104s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.063630583 +0000 UTC m=+222.446836740" watchObservedRunningTime="2026-02-23 08:51:31.064287104 +0000 UTC m=+222.447493261" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.082535 4940 csr.go:261] certificate signing request csr-mxg6p is approved, waiting to be issued Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.091360 4940 csr.go:257] certificate signing request csr-mxg6p is issued Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.099693 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2c95j"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.102032 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr"] Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.115261 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8d22c" podStartSLOduration=154.115239631 
podStartE2EDuration="2m34.115239631s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.112963259 +0000 UTC m=+222.496169436" watchObservedRunningTime="2026-02-23 08:51:31.115239631 +0000 UTC m=+222.498445788" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.143470 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.143765 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6qcpm" podStartSLOduration=154.143748437 podStartE2EDuration="2m34.143748437s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.143006143 +0000 UTC m=+222.526212320" watchObservedRunningTime="2026-02-23 08:51:31.143748437 +0000 UTC m=+222.526954594" Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.144102 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.644085017 +0000 UTC m=+223.027291234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: W0223 08:51:31.169842 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d9b99ac_2db8_435f_ad9f_4d7335a40e19.slice/crio-bc366a16c8b45dc2216d12368ba52225fe0e1fffa0d15cd53556890b35b88e80 WatchSource:0}: Error finding container bc366a16c8b45dc2216d12368ba52225fe0e1fffa0d15cd53556890b35b88e80: Status 404 returned error can't find the container with id bc366a16c8b45dc2216d12368ba52225fe0e1fffa0d15cd53556890b35b88e80 Feb 23 08:51:31 crc kubenswrapper[4940]: W0223 08:51:31.179043 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24821bad_09c7_4880_bf0e_a6e829284f2e.slice/crio-ae788f38bc5c1aa687eb57c029bf558fa7d41c897c98c8e52fb947703573b572 WatchSource:0}: Error finding container ae788f38bc5c1aa687eb57c029bf558fa7d41c897c98c8e52fb947703573b572: Status 404 returned error can't find the container with id ae788f38bc5c1aa687eb57c029bf558fa7d41c897c98c8e52fb947703573b572 Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.189465 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" event={"ID":"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf","Type":"ContainerStarted","Data":"730a767089f92a4fda2be6fa16f3df35ce6ca674873f048fe734eef9090dfedd"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.200488 4940 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" event={"ID":"c733e45d-a072-4619-b2f8-aea6d77b112f","Type":"ContainerStarted","Data":"b0f38fd326c8c7e639046b6dbffc5f3aeee8b6a9d51e0727dd2478b5c51b6b74"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.203523 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" event={"ID":"4f793610-43f5-4faf-b61d-e2330db0b177","Type":"ContainerStarted","Data":"8cede644ed4b4fdee7b52cae99cbe1c458615de54ae4b616f33323a54d6ea595"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.203596 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.214560 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" event={"ID":"21f25477-51d5-480d-a252-f821cc008560","Type":"ContainerStarted","Data":"42729a712477b969fda6091ac4e0670e073780cdcc4c0bbb48295772b2cfb85c"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.217592 4940 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rrhk2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.217668 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" podUID="c733e45d-a072-4619-b2f8-aea6d77b112f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.220307 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress/router-default-5444994796-tqnlh" podStartSLOduration=154.220291677 podStartE2EDuration="2m34.220291677s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.201363587 +0000 UTC m=+222.584569754" watchObservedRunningTime="2026-02-23 08:51:31.220291677 +0000 UTC m=+222.603497824" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.229842 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" event={"ID":"4de72bcc-6d41-47cc-b9f7-f4cca10b977f","Type":"ContainerStarted","Data":"fb9a46c1784629bba68f51386cc31494bc80fb6db2dbf084e9a24de8c7025dc4"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.229891 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" event={"ID":"4de72bcc-6d41-47cc-b9f7-f4cca10b977f","Type":"ContainerStarted","Data":"91109c9abe80fc374b66b9c6e7c00edbc1cec613f74a58bbd76c91d57c49f93a"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.232678 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" podStartSLOduration=154.23266309 podStartE2EDuration="2m34.23266309s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.229352915 +0000 UTC m=+222.612559072" watchObservedRunningTime="2026-02-23 08:51:31.23266309 +0000 UTC m=+222.615869247" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.237665 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" 
event={"ID":"1b16df0c-b660-4f3c-9d26-cfff395d5c88","Type":"ContainerStarted","Data":"429301659db5ff149f44a22558d8598803d8457df67f2e20dc866f8e87d8acaf"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.243685 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sppfz" event={"ID":"2dfca0b5-693a-4e28-ae27-d5532038616c","Type":"ContainerStarted","Data":"1e85666d6ccc095d5827c35d28e684c75e26e9c869332baef2a8f2a91f386ae7"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.244542 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.245141 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.745101535 +0000 UTC m=+223.128307692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.245160 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sppfz" event={"ID":"2dfca0b5-693a-4e28-ae27-d5532038616c","Type":"ContainerStarted","Data":"d76fd3e1498bdca7ea09e0d97f7b6618450923a0a29cb9904ed58d530e72a922"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.245185 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.261742 4940 generic.go:334] "Generic (PLEG): container finished" podID="48ab701f-fc67-4d29-9cad-337e223f6f87" containerID="ed026c35719026e63770b0630b779b4c44d5d023d629202404d356f1ee9a7975" exitCode=0 Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.261821 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" event={"ID":"48ab701f-fc67-4d29-9cad-337e223f6f87","Type":"ContainerDied","Data":"ed026c35719026e63770b0630b779b4c44d5d023d629202404d356f1ee9a7975"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.266126 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" event={"ID":"ef3ada46-965f-42e7-b89b-a67618bff8c6","Type":"ContainerStarted","Data":"ba42e45c9eac5803cd96107c398f2705e3eb293b90715b04981c6c0f2ef3ec85"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.267869 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" podStartSLOduration=154.267855308 podStartE2EDuration="2m34.267855308s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.258207381 +0000 UTC m=+222.641413538" watchObservedRunningTime="2026-02-23 08:51:31.267855308 +0000 UTC m=+222.651061465" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.277230 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" event={"ID":"e13ea819-2f94-423e-ab3f-c7b6d03ad686","Type":"ContainerStarted","Data":"7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.277979 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.287787 4940 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-224dd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.287857 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" podUID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.293860 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2zgdn" 
event={"ID":"07ef0edd-666b-4ced-9a27-51433a59c6c0","Type":"ContainerStarted","Data":"dfff17a1648e69202a24771d9c9d5be6a439fc1f7a9ecd98c487199f22c3fef3"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.302022 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" event={"ID":"d2d9e940-6dc2-4325-b5eb-410c1f038ae5","Type":"ContainerStarted","Data":"b8737e081109e89f20b3841dcf6533b6d17c00ad13ebc3e2fb1a49d3b6f37ff6"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.302066 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" event={"ID":"d2d9e940-6dc2-4325-b5eb-410c1f038ae5","Type":"ContainerStarted","Data":"46642c8ba443e6ec129cbce01470f925bca00b9755e47d96ca8be21fc20c5d36"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.307499 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" event={"ID":"d86468b3-88b2-4c49-b807-c00eceb862e2","Type":"ContainerStarted","Data":"8342d49d51ff9af0addd62fe28e4fa6b683a9f24f88e20002efdb4b80b801ed9"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.307541 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" event={"ID":"d86468b3-88b2-4c49-b807-c00eceb862e2","Type":"ContainerStarted","Data":"4ea5a98db741a4196812b3779bd5bf160e6402fa9bd10d4c9e33025b4e9eaa4c"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.313062 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" event={"ID":"b546c0fc-b66f-4f2b-ab03-364362906f88","Type":"ContainerStarted","Data":"1cc8083fb784ab4bfd72346b4854f80c7d266786e5bbd57967df653c18d5664d"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.313749 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.323851 4940 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz7cz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.323900 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" podUID="b546c0fc-b66f-4f2b-ab03-364362906f88" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.327784 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" event={"ID":"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e","Type":"ContainerStarted","Data":"2019db9e062814c2c093c1b9fa8962e114d17035d38e9c3eea45c8e4f286b0ad"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.327840 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" event={"ID":"2f476562-bc2b-4c2d-8283-fae9b3f8ac4e","Type":"ContainerStarted","Data":"2921d75ea759177f303c86a4e53b11238d6c240c540c8fff9bdbce819f38a6c7"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.328467 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.329294 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" 
event={"ID":"46095393-c72a-4539-b3e4-e2f3f35301b8","Type":"ContainerStarted","Data":"b7aca903c862bf403b80e68c2afc77a5ddd7aacd23da48d452601d1a51ccd30f"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.330306 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6f7z7" event={"ID":"63609b48-d163-4000-a23b-bb70a6719c5c","Type":"ContainerStarted","Data":"e8bf5c4f4f512a0551172ca8529e579baaad5e3a1fc91282946befa28767c31f"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.346399 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.348519 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.848500648 +0000 UTC m=+223.231706805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.352341 4940 generic.go:334] "Generic (PLEG): container finished" podID="71aa9018-d3be-454d-8d1c-5853f3971151" containerID="a96a663bcea4ae6facb08f94a182acfdab8ff2d07a20562e073a3e1c65d7f75c" exitCode=0 Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.354262 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-95wjd" podStartSLOduration=154.354246911 podStartE2EDuration="2m34.354246911s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.306289008 +0000 UTC m=+222.689495165" watchObservedRunningTime="2026-02-23 08:51:31.354246911 +0000 UTC m=+222.737453068" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.354841 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sppfz" podStartSLOduration=5.35483522 podStartE2EDuration="5.35483522s" podCreationTimestamp="2026-02-23 08:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.34288738 +0000 UTC m=+222.726093537" watchObservedRunningTime="2026-02-23 08:51:31.35483522 +0000 UTC m=+222.738041377" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.373511 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-znvc9" event={"ID":"71aa9018-d3be-454d-8d1c-5853f3971151","Type":"ContainerDied","Data":"a96a663bcea4ae6facb08f94a182acfdab8ff2d07a20562e073a3e1c65d7f75c"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.373784 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" event={"ID":"c5a6d2fa-46ed-4669-ac19-c335595a24fd","Type":"ContainerStarted","Data":"48482d932a91c7f55af7c3e5751caea11028f70cf9ff0059b7246925ac490546"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.379775 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" event={"ID":"9a9d3cd7-c717-4851-b4f3-2bb17d88d0bd","Type":"ContainerStarted","Data":"32199a27f702b193f7056cb66e8a877ec02011aa83fa42d927da0c96268eb25c"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.381759 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" event={"ID":"57505051-bc9e-499e-9013-6365439ebb68","Type":"ContainerStarted","Data":"893d2f15e98d15de9ea51063b9a41ae888f04f979ecc9866365f32c85db25e4b"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.389902 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" event={"ID":"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb","Type":"ContainerStarted","Data":"37bfdd8e7b14b800edf5bf5ca6fb8909ea4cc3348f7664cc828d8a7126eed159"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.393666 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" event={"ID":"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd","Type":"ContainerStarted","Data":"273acbf1072dc3e70ce31ddf5a2f9538f372dfb56bc3499d0d6c5cd2da5cae4a"} Feb 23 08:51:31 crc kubenswrapper[4940]: 
I0223 08:51:31.399361 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" event={"ID":"783e15c8-9066-455f-878d-86215d82093b","Type":"ContainerStarted","Data":"5f429bebaeba167be1cacb912f9576fa0a80f805ff2b77f00e820c7265e5d5eb"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.401912 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" event={"ID":"9de4a20c-3f76-4aa8-8347-42f3b3f53145","Type":"ContainerStarted","Data":"c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.401981 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" event={"ID":"9de4a20c-3f76-4aa8-8347-42f3b3f53145","Type":"ContainerStarted","Data":"ee288c906ac0e67b3520b29e0f987e1ea4c2abfb1f71555f74c6a3a74e194ced"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.402955 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.404188 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" event={"ID":"6a449118-5ee9-42f2-bdc2-a23f1c6febf6","Type":"ContainerStarted","Data":"8f8db6d49ebc7eb232bb90f43b4036d74e85f4568cdd4c9469d5fd66dce0aa2a"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.405172 4940 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j9x9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.405206 4940 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.406963 4940 generic.go:334] "Generic (PLEG): container finished" podID="1fee6a64-486f-4aef-9242-8bf07796d6e3" containerID="1008e15c679246e7508bbece406e898cd2bf087361cc42cc57e6ec31125bff75" exitCode=0 Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.407570 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" event={"ID":"1fee6a64-486f-4aef-9242-8bf07796d6e3","Type":"ContainerDied","Data":"1008e15c679246e7508bbece406e898cd2bf087361cc42cc57e6ec31125bff75"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.409204 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" event={"ID":"26986446-4844-49cf-a77d-7d316d2d826b","Type":"ContainerStarted","Data":"6087eba6264ba10d6eae6d5db0ab01e7ac797246f0e69b2a3a62bde2a82f8aac"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.411166 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" event={"ID":"32e17925-dd05-43de-8e22-105f0002b651","Type":"ContainerStarted","Data":"3d84451e12401551ccae433764b0bf794ac43a943371f052084b0d6107e534a8"} Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.412679 4940 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qcpm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.412708 4940 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qcpm" podUID="916e1f6f-2bfc-41e7-86c2-6c379e3638c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.417419 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.431232 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.431482 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.447356 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.447428 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 08:51:31.947412829 +0000 UTC m=+223.330618986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.448049 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.450491 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:31.950467976 +0000 UTC m=+223.333674203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.478109 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" podStartSLOduration=154.478084833 podStartE2EDuration="2m34.478084833s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.466309389 +0000 UTC m=+222.849515546" watchObservedRunningTime="2026-02-23 08:51:31.478084833 +0000 UTC m=+222.861290990" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.557627 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.558988 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.058966531 +0000 UTC m=+223.442172678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.617084 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" podStartSLOduration=154.617063745 podStartE2EDuration="2m34.617063745s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.582438796 +0000 UTC m=+222.965644963" watchObservedRunningTime="2026-02-23 08:51:31.617063745 +0000 UTC m=+223.000269902" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.663332 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.663731 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.163715386 +0000 UTC m=+223.546921543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.725713 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xlz2g" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.764698 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.764821 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.264804836 +0000 UTC m=+223.648010983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.765116 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.765457 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.265449667 +0000 UTC m=+223.648655824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.852237 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6f7z7" podStartSLOduration=5.852216782 podStartE2EDuration="5.852216782s" podCreationTimestamp="2026-02-23 08:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.846561692 +0000 UTC m=+223.229767859" watchObservedRunningTime="2026-02-23 08:51:31.852216782 +0000 UTC m=+223.235422929" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.865764 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.866206 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.366192446 +0000 UTC m=+223.749398593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.884808 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:31 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:31 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:31 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.884853 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.968575 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:31 crc kubenswrapper[4940]: I0223 08:51:31.968551 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f9p7v" podStartSLOduration=154.968536696 podStartE2EDuration="2m34.968536696s" 
podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:31.916849034 +0000 UTC m=+223.300055211" watchObservedRunningTime="2026-02-23 08:51:31.968536696 +0000 UTC m=+223.351742853" Feb 23 08:51:31 crc kubenswrapper[4940]: E0223 08:51:31.968889 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.468878716 +0000 UTC m=+223.852084873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.055251 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2zgdn" podStartSLOduration=155.055231048 podStartE2EDuration="2m35.055231048s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.051972665 +0000 UTC m=+223.435178822" watchObservedRunningTime="2026-02-23 08:51:32.055231048 +0000 UTC m=+223.438437205" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.070408 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.070529 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.570510553 +0000 UTC m=+223.953716710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.070752 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.071138 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.571130123 +0000 UTC m=+223.954336280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.089393 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nmq4g" podStartSLOduration=155.089367251 podStartE2EDuration="2m35.089367251s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.082840375 +0000 UTC m=+223.466046542" watchObservedRunningTime="2026-02-23 08:51:32.089367251 +0000 UTC m=+223.472573408" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.106477 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-23 08:46:31 +0000 UTC, rotation deadline is 2026-12-15 23:04:13.73802906 +0000 UTC Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.111224 4940 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7094h12m41.626822704s for next certificate rotation Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.116114 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.172705 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.173086 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.67306919 +0000 UTC m=+224.056275347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.270671 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" podStartSLOduration=155.270638148 podStartE2EDuration="2m35.270638148s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.207365598 +0000 UTC m=+223.590571765" watchObservedRunningTime="2026-02-23 08:51:32.270638148 +0000 UTC m=+223.653844305" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.281966 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" 
Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.282484 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.782470333 +0000 UTC m=+224.165676490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.384222 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.384665 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:32.884650258 +0000 UTC m=+224.267856415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.387954 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" podStartSLOduration=155.387936332 podStartE2EDuration="2m35.387936332s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.341429795 +0000 UTC m=+223.724635962" watchObservedRunningTime="2026-02-23 08:51:32.387936332 +0000 UTC m=+223.771142489" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.445894 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" event={"ID":"7a60bce1-d0e8-451a-91da-396ec5d5c53b","Type":"ContainerStarted","Data":"392a172efc91474c2ad993ca4f7a197983593259b5030fc047d1a98036397332"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.445956 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.445971 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" event={"ID":"7a60bce1-d0e8-451a-91da-396ec5d5c53b","Type":"ContainerStarted","Data":"59f2ec704ec86ac6678a8db67aade88448a9274dda741104583c937f19c5d86c"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 
08:51:32.453014 4940 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9xdrp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.453080 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" podUID="7a60bce1-d0e8-451a-91da-396ec5d5c53b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.458167 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" event={"ID":"7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd","Type":"ContainerStarted","Data":"75e2e5e4830e6b844f82afc73f4789d315f912f335e4e3185fd8ec3f5d76b7eb"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.459042 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.471108 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" event={"ID":"0d9b99ac-2db8-435f-ad9f-4d7335a40e19","Type":"ContainerStarted","Data":"3ba4423257e0a69038f367d40a0bf5f0198018a44b1764159fd230d8b71e6501"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.471165 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" 
event={"ID":"0d9b99ac-2db8-435f-ad9f-4d7335a40e19","Type":"ContainerStarted","Data":"bc366a16c8b45dc2216d12368ba52225fe0e1fffa0d15cd53556890b35b88e80"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.472206 4940 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wtxsb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.472244 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" podUID="7f57c7c4-ddb8-48dd-853e-d87bdf3a9abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.480279 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" event={"ID":"6a449118-5ee9-42f2-bdc2-a23f1c6febf6","Type":"ContainerStarted","Data":"d1efd2aca5d27cbcf6af736c73bfc5634e93443fc12864022572e386f1a53048"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.488674 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.489035 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 08:51:32.989021842 +0000 UTC m=+224.372227999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.492486 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5tf64" podStartSLOduration=155.492476071 podStartE2EDuration="2m35.492476071s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.489715773 +0000 UTC m=+223.872921930" watchObservedRunningTime="2026-02-23 08:51:32.492476071 +0000 UTC m=+223.875682228" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.527699 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" event={"ID":"4f793610-43f5-4faf-b61d-e2330db0b177","Type":"ContainerStarted","Data":"29b33c8a0a8e1a730e530e18f96a4cb67322ee3b61c6520c1c40b063ea21233a"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.558706 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8ls95" event={"ID":"ef3ada46-965f-42e7-b89b-a67618bff8c6","Type":"ContainerStarted","Data":"e5acefbef4a2cf8bef0340ee9a5ffd1c077a84c657b2d7dca02ea676a78cff2b"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.585179 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" event={"ID":"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf","Type":"ContainerStarted","Data":"fcd9c640e28e733bc2a91a5c9bdbc0e58eab6ab87d1ef181bfbeec0dd903fb9a"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.585235 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" event={"ID":"bc0bbd6d-8e03-4d03-b2a8-2fe201d6c6bf","Type":"ContainerStarted","Data":"3e9b3a68fce7681d54a750200e4f96062ac70108ac11272861b9383b5b8c2feb"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.588839 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" event={"ID":"57505051-bc9e-499e-9013-6365439ebb68","Type":"ContainerStarted","Data":"92f8b1aa141625aa5a9e374d192105e278fc0c37bb58fbaa6852307a19ab13dc"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.593021 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.593210 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.093184909 +0000 UTC m=+224.476391066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.593427 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.594503 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" event={"ID":"783e15c8-9066-455f-878d-86215d82093b","Type":"ContainerStarted","Data":"b3e0c71820dad713a146fb1ea1997651fec3b8754ded45a86f4e194ebeb9f3d3"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.594545 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" event={"ID":"783e15c8-9066-455f-878d-86215d82093b","Type":"ContainerStarted","Data":"f946c8cf6dd5ce5a383c99d5295e080a6b38c0bb62b82f674c706d5a5f951531"} Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.594937 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.094924735 +0000 UTC m=+224.478130892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.612758 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" event={"ID":"46095393-c72a-4539-b3e4-e2f3f35301b8","Type":"ContainerStarted","Data":"54af9de178830cdaf3de69426ed636c6c78e134954592b1b4858b7ced7e3d8c4"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.612805 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" event={"ID":"46095393-c72a-4539-b3e4-e2f3f35301b8","Type":"ContainerStarted","Data":"26aaf6d64f814e8f647c5a2ec6507c4446c9cf8b7ddd3db8a8aed5d89b245fdc"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.614857 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zgt46" event={"ID":"24821bad-09c7-4880-bf0e-a6e829284f2e","Type":"ContainerStarted","Data":"f906cb89748918f1092496937d5825880c03190a8dd28cc326bebf86af2d5281"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.614885 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zgt46" event={"ID":"24821bad-09c7-4880-bf0e-a6e829284f2e","Type":"ContainerStarted","Data":"ae788f38bc5c1aa687eb57c029bf558fa7d41c897c98c8e52fb947703573b572"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.624540 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" event={"ID":"32e17925-dd05-43de-8e22-105f0002b651","Type":"ContainerStarted","Data":"b755bbe7b2e54725608e95c65531fc9177db12b932a7e7ef1991af65f26c6085"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.629010 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2zgdn" event={"ID":"07ef0edd-666b-4ced-9a27-51433a59c6c0","Type":"ContainerStarted","Data":"3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.642501 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" event={"ID":"97dc456f-7c0c-49f0-ad5b-e4c791429d57","Type":"ContainerStarted","Data":"3f75fb3a18359519d4d4105e8ac97aa4e33ec7e9ece5a9791b768d77e57bf0a9"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.642560 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" event={"ID":"97dc456f-7c0c-49f0-ad5b-e4c791429d57","Type":"ContainerStarted","Data":"f8b9ebab690fc018ade64f20faead10f932aa98f1747f4a2c81c1bf0c7f1b005"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.668973 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" event={"ID":"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd","Type":"ContainerStarted","Data":"176897eb7892731e30e9aade3c7e0630068bb4d4bcf454e4044472769f5a39e7"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.669031 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" event={"ID":"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd","Type":"ContainerStarted","Data":"f2b4f7ccbc8e8bcd8b1d5a8a1b2b7f520031ff9128ee698151f95447eff5fb25"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.676471 4940 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lb8pn" podStartSLOduration=155.676447153 podStartE2EDuration="2m35.676447153s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.67447998 +0000 UTC m=+224.057686157" watchObservedRunningTime="2026-02-23 08:51:32.676447153 +0000 UTC m=+224.059653320" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.693304 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" event={"ID":"1b16df0c-b660-4f3c-9d26-cfff395d5c88","Type":"ContainerStarted","Data":"517398e88a48d2218e2707899fb06889fdece6b02715d0d6da6fdfd4576022e5"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.694335 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.695921 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.195891641 +0000 UTC m=+224.579097858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.702951 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" podStartSLOduration=155.702932024 podStartE2EDuration="2m35.702932024s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.702357225 +0000 UTC m=+224.085563392" watchObservedRunningTime="2026-02-23 08:51:32.702932024 +0000 UTC m=+224.086138181" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.710060 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" event={"ID":"b546c0fc-b66f-4f2b-ab03-364362906f88","Type":"ContainerStarted","Data":"75bca2999ee45f9128b5c2187ea9c265c0436c4782eb2164e6a55699f7525385"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.711734 4940 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lz7cz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.711799 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" podUID="b546c0fc-b66f-4f2b-ab03-364362906f88" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.746352 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" event={"ID":"c5a6d2fa-46ed-4669-ac19-c335595a24fd","Type":"ContainerStarted","Data":"c83edfb3950198c240b1aec75abfe42d1c8735bde35277e0c7d1bc684b134219"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.772696 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" podStartSLOduration=155.772676278 podStartE2EDuration="2m35.772676278s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.742886303 +0000 UTC m=+224.126092460" watchObservedRunningTime="2026-02-23 08:51:32.772676278 +0000 UTC m=+224.155882435" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.773625 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" podStartSLOduration=155.773621998 podStartE2EDuration="2m35.773621998s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.772295186 +0000 UTC m=+224.155501343" watchObservedRunningTime="2026-02-23 08:51:32.773621998 +0000 UTC m=+224.156828145" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.787522 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" 
event={"ID":"1fee6a64-486f-4aef-9242-8bf07796d6e3","Type":"ContainerStarted","Data":"1591258bb95bee1301b9155d27d6a45004c2cb5e9a9e37c9b631e366a8478a8a"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.788286 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.797960 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.799471 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.299460679 +0000 UTC m=+224.682666836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.808204 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" event={"ID":"48ab701f-fc67-4d29-9cad-337e223f6f87","Type":"ContainerStarted","Data":"39ff5ef64b42aee88ec342028ddfd012021fc27979030de0986b4a5a9e5fc1c6"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.831653 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" event={"ID":"f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb","Type":"ContainerStarted","Data":"df3d4e48e3cf95fb2516c4c3e9d8fc9b2471ff1a44453ee88f8daab22119023f"} Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.837197 4940 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j9x9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.837239 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.846539 4940 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.869482 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.882853 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:32 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:32 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:32 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.882902 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.900443 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.901521 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.401505239 +0000 UTC m=+224.784711396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.901765 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:32 crc kubenswrapper[4940]: E0223 08:51:32.927593 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.427577747 +0000 UTC m=+224.810783904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:32 crc kubenswrapper[4940]: I0223 08:51:32.961995 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-whcws" podStartSLOduration=155.961967269 podStartE2EDuration="2m35.961967269s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.896913733 +0000 UTC m=+224.280119890" watchObservedRunningTime="2026-02-23 08:51:32.961967269 +0000 UTC m=+224.345173436" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.004345 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.004862 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.50484214 +0000 UTC m=+224.888048307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.019204 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-klkgk" podStartSLOduration=156.019185585 podStartE2EDuration="2m36.019185585s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:32.97996742 +0000 UTC m=+224.363173587" watchObservedRunningTime="2026-02-23 08:51:33.019185585 +0000 UTC m=+224.402391742" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.066032 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z7w6c" podStartSLOduration=156.066013583 podStartE2EDuration="2m36.066013583s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.019689251 +0000 UTC m=+224.402895408" watchObservedRunningTime="2026-02-23 08:51:33.066013583 +0000 UTC m=+224.449219740" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.092285 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" podStartSLOduration=156.092268066 podStartE2EDuration="2m36.092268066s" 
podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.069123761 +0000 UTC m=+224.452329918" watchObservedRunningTime="2026-02-23 08:51:33.092268066 +0000 UTC m=+224.475474223" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.093868 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g7xj" podStartSLOduration=156.093863157 podStartE2EDuration="2m36.093863157s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.091417529 +0000 UTC m=+224.474623686" watchObservedRunningTime="2026-02-23 08:51:33.093863157 +0000 UTC m=+224.477069304" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.110395 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.110897 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.610882228 +0000 UTC m=+224.994088385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.117724 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-wzbgr" podStartSLOduration=156.117699653 podStartE2EDuration="2m36.117699653s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.116921559 +0000 UTC m=+224.500127716" watchObservedRunningTime="2026-02-23 08:51:33.117699653 +0000 UTC m=+224.500905810" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.166583 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zgt46" podStartSLOduration=7.166552065 podStartE2EDuration="7.166552065s" podCreationTimestamp="2026-02-23 08:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.143951197 +0000 UTC m=+224.527157354" watchObservedRunningTime="2026-02-23 08:51:33.166552065 +0000 UTC m=+224.549758232" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.206995 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" podStartSLOduration=156.206976768 podStartE2EDuration="2m36.206976768s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.205848113 +0000 UTC m=+224.589054290" watchObservedRunningTime="2026-02-23 08:51:33.206976768 +0000 UTC m=+224.590182925" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.217348 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.217791 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.717774141 +0000 UTC m=+225.100980298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.280873 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pw8l4" podStartSLOduration=156.280853054 podStartE2EDuration="2m36.280853054s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.279354076 +0000 UTC m=+224.662560233" watchObservedRunningTime="2026-02-23 08:51:33.280853054 +0000 UTC m=+224.664059211" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.319402 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.319834 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.819815771 +0000 UTC m=+225.203021928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.324832 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hrnj4" podStartSLOduration=156.32481267 podStartE2EDuration="2m36.32481267s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.324056415 +0000 UTC m=+224.707262572" watchObservedRunningTime="2026-02-23 08:51:33.32481267 +0000 UTC m=+224.708018827" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.405074 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" podStartSLOduration=156.405056157 podStartE2EDuration="2m36.405056157s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.363467497 +0000 UTC m=+224.746673654" watchObservedRunningTime="2026-02-23 08:51:33.405056157 +0000 UTC m=+224.788262314" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.406715 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" podStartSLOduration=156.40671058 podStartE2EDuration="2m36.40671058s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.404818361 +0000 UTC m=+224.788024518" watchObservedRunningTime="2026-02-23 08:51:33.40671058 +0000 UTC m=+224.789916737" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.423023 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.423451 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:33.923435741 +0000 UTC m=+225.306641908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.477692 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8smd9" podStartSLOduration=156.477671684 podStartE2EDuration="2m36.477671684s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.441429972 +0000 UTC m=+224.824636139" watchObservedRunningTime="2026-02-23 08:51:33.477671684 +0000 UTC m=+224.860877841" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.524917 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.525294 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.025277185 +0000 UTC m=+225.408483342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.625983 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.626520 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.126494238 +0000 UTC m=+225.509700395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.727644 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.728056 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.228018812 +0000 UTC m=+225.611224959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.819294 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.819347 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.820898 4940 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-slpn2 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.820961 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" podUID="48ab701f-fc67-4d29-9cad-337e223f6f87" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.828332 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:33 crc 
kubenswrapper[4940]: E0223 08:51:33.828502 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.328477002 +0000 UTC m=+225.711683159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.828661 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.828978 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.328971648 +0000 UTC m=+225.712177805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.842886 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" event={"ID":"4f793610-43f5-4faf-b61d-e2330db0b177","Type":"ContainerStarted","Data":"7767f50135359a229bee447e6eb3e118fefd15912eae8f36432984e0d8a5fa86"} Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.845177 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" event={"ID":"71aa9018-d3be-454d-8d1c-5853f3971151","Type":"ContainerStarted","Data":"c71698e1b3e3d7306683f0eda66528cab3dbf087e512b123a5f4c43188c0f036"} Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.845202 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" event={"ID":"71aa9018-d3be-454d-8d1c-5853f3971151","Type":"ContainerStarted","Data":"1d762fa35148b7737fbe8472dac48ffcc94c199d1443d1ae6d109ffeadffaeab"} Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.848820 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" event={"ID":"97dc456f-7c0c-49f0-ad5b-e4c791429d57","Type":"ContainerStarted","Data":"c8d189becef0dfbf71093c65fa10924249b0cf67d4eab4315d43ad9e9495aa66"} Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.850451 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mg5lr" 
event={"ID":"726e20bd-8ba8-4ae8-a2ce-6d1a50300dfd","Type":"ContainerStarted","Data":"1199fafa76d3ae16750b142e77e09d608b7230baf3e582674d89060b7d05a35d"} Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.852407 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" event={"ID":"21f25477-51d5-480d-a252-f821cc008560","Type":"ContainerStarted","Data":"ebeb384bdf7d928f7718d0e0f699813ddce0d33988860b6db752491d74ee77e1"} Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.853156 4940 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j9x9v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.853189 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.853253 4940 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9xdrp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.853307 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" podUID="7a60bce1-d0e8-451a-91da-396ec5d5c53b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 
23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.861488 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lz7cz" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.877521 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:33 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:33 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:33 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.877560 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.881026 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-687p7" podStartSLOduration=156.8810131 podStartE2EDuration="2m36.8810131s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.479985667 +0000 UTC m=+224.863191824" watchObservedRunningTime="2026-02-23 08:51:33.8810131 +0000 UTC m=+225.264219257" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.881266 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wwhg8" podStartSLOduration=156.881261958 podStartE2EDuration="2m36.881261958s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.879317446 +0000 UTC m=+225.262523613" watchObservedRunningTime="2026-02-23 08:51:33.881261958 +0000 UTC m=+225.264468115" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.929674 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.929776 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.429759388 +0000 UTC m=+225.812965545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.934017 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:33 crc kubenswrapper[4940]: E0223 08:51:33.966116 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.45534555 +0000 UTC m=+225.838551707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.970221 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2c95j" podStartSLOduration=156.970203772 podStartE2EDuration="2m36.970203772s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:33.966155943 +0000 UTC m=+225.349362100" watchObservedRunningTime="2026-02-23 08:51:33.970203772 +0000 UTC m=+225.353409929" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.989740 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:33 crc kubenswrapper[4940]: I0223 08:51:33.996386 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.003365 4940 patch_prober.go:28] interesting pod/apiserver-76f77b778f-znvc9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.003435 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" podUID="71aa9018-d3be-454d-8d1c-5853f3971151" 
containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.005813 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" podStartSLOduration=157.005798412 podStartE2EDuration="2m37.005798412s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:34.003409477 +0000 UTC m=+225.386615634" watchObservedRunningTime="2026-02-23 08:51:34.005798412 +0000 UTC m=+225.389004569" Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.036264 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.036486 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.536440825 +0000 UTC m=+225.919646982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.036825 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.037148 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.537135868 +0000 UTC m=+225.920342025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.138179 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.138529 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.638500895 +0000 UTC m=+226.021707062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.138624 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.138906 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.638895548 +0000 UTC m=+226.022101705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.240586 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.240968 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.740949689 +0000 UTC m=+226.124155846 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.342261 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.342586 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.842572116 +0000 UTC m=+226.225778273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.443736 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.444131 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:34.944113819 +0000 UTC m=+226.327319976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.544875 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.545254 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.0452372 +0000 UTC m=+226.428443357 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.646272 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.646974 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.14695916 +0000 UTC m=+226.530165317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.748263 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.748563 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.248551606 +0000 UTC m=+226.631757753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.849221 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.849581 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.349562923 +0000 UTC m=+226.732769080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.853852 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wtxsb" Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.875278 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" event={"ID":"21f25477-51d5-480d-a252-f821cc008560","Type":"ContainerStarted","Data":"2433a614becd756b37b6a8c6cf8371d3b9f97cd7d414eea1050e6c5a4350b93c"} Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.881880 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:34 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:34 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:34 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.881949 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.886373 4940 generic.go:334] "Generic (PLEG): container finished" podID="1b16df0c-b660-4f3c-9d26-cfff395d5c88" 
containerID="517398e88a48d2218e2707899fb06889fdece6b02715d0d6da6fdfd4576022e5" exitCode=0 Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.887575 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" event={"ID":"1b16df0c-b660-4f3c-9d26-cfff395d5c88","Type":"ContainerDied","Data":"517398e88a48d2218e2707899fb06889fdece6b02715d0d6da6fdfd4576022e5"} Feb 23 08:51:34 crc kubenswrapper[4940]: I0223 08:51:34.953116 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:34 crc kubenswrapper[4940]: E0223 08:51:34.955750 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.455733845 +0000 UTC m=+226.838940002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.054175 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.054421 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.554393347 +0000 UTC m=+226.937599504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.054895 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.055229 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.555222203 +0000 UTC m=+226.938428350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.156531 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.156684 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.656664804 +0000 UTC m=+227.039870961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.156841 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.157137 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.657115589 +0000 UTC m=+227.040321746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.257967 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.258206 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.758174008 +0000 UTC m=+227.141380165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.258520 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.258866 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.758852369 +0000 UTC m=+227.142058526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.360106 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.360300 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.860273739 +0000 UTC m=+227.243479896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.360457 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.360826 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.860811046 +0000 UTC m=+227.244017203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.461138 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.461481 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:35.961465032 +0000 UTC m=+227.344671189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.562866 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.563351 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.063329647 +0000 UTC m=+227.446535854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.628890 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-224dd"] Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.637181 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"] Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.637433 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" podUID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" containerName="route-controller-manager" containerID="cri-o://ac0a19ea351b92f589453a34077d4bddefb33a6b68a444fdf2e10a434d067cc0" gracePeriod=30 Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.664207 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.664511 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 08:51:36.164476118 +0000 UTC m=+227.547682275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.731717 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-n95bv" Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.766639 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.766962 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.266951162 +0000 UTC m=+227.650157319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.847110 4940 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.868196 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.868530 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.368515157 +0000 UTC m=+227.751721314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.881816 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:35 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:35 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:35 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.881876 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.899214 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" event={"ID":"21f25477-51d5-480d-a252-f821cc008560","Type":"ContainerStarted","Data":"758c1c8248336b8f0dc573ebed93b75b9b00f0812720c237e0bf924073a52a5d"} Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.899289 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" event={"ID":"21f25477-51d5-480d-a252-f821cc008560","Type":"ContainerStarted","Data":"8126f76e48aeb5a3a301bbece7749a3bfad7f55a12099ed7582061033ef571d0"} Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.908538 4940 
generic.go:334] "Generic (PLEG): container finished" podID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" containerID="ac0a19ea351b92f589453a34077d4bddefb33a6b68a444fdf2e10a434d067cc0" exitCode=0 Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.908863 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" event={"ID":"af5df6df-7f6c-40a3-b1da-44af29cdee8b","Type":"ContainerDied","Data":"ac0a19ea351b92f589453a34077d4bddefb33a6b68a444fdf2e10a434d067cc0"} Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.909875 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" podUID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" containerName="controller-manager" containerID="cri-o://7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876" gracePeriod=30 Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.949703 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wprw9"] Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.951907 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.954984 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.966363 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-m4hz9" podStartSLOduration=9.966346683 podStartE2EDuration="9.966346683s" podCreationTimestamp="2026-02-23 08:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:35.950336684 +0000 UTC m=+227.333542841" watchObservedRunningTime="2026-02-23 08:51:35.966346683 +0000 UTC m=+227.349552840" Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.969136 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:35 crc kubenswrapper[4940]: E0223 08:51:35.972132 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.472120797 +0000 UTC m=+227.855326954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:35 crc kubenswrapper[4940]: I0223 08:51:35.987649 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wprw9"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.035455 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.037253 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.041211 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.041454 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.044477 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.070012 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:36 crc 
kubenswrapper[4940]: I0223 08:51:36.070514 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-catalog-content\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.070548 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-utilities\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.070626 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n729\" (UniqueName: \"kubernetes.io/projected/fc357ef5-0994-4918-859b-d623e534da2a-kube-api-access-5n729\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.070744 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.570729227 +0000 UTC m=+227.953935384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.113015 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqw5m"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.114040 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.119030 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.147672 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqw5m"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.180907 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nm4p\" (UniqueName: \"kubernetes.io/projected/0917e41b-2e05-4596-9ad4-05b382ee9f56-kube-api-access-4nm4p\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.180955 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181032 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181101 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-catalog-content\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181127 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-utilities\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181149 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-utilities\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181171 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181201 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-catalog-content\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.181250 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n729\" (UniqueName: \"kubernetes.io/projected/fc357ef5-0994-4918-859b-d623e534da2a-kube-api-access-5n729\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.181795 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.681784724 +0000 UTC m=+228.064990881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.182183 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-utilities\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.182963 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-catalog-content\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.210045 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n729\" (UniqueName: \"kubernetes.io/projected/fc357ef5-0994-4918-859b-d623e534da2a-kube-api-access-5n729\") pod \"certified-operators-wprw9\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") " pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.273152 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.282292 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.282567 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nm4p\" (UniqueName: \"kubernetes.io/projected/0917e41b-2e05-4596-9ad4-05b382ee9f56-kube-api-access-4nm4p\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.282681 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.282734 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-utilities\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.282756 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") 
" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.282787 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-catalog-content\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.283250 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-catalog-content\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.283335 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.783316937 +0000 UTC m=+228.166523104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.284466 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-utilities\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.284517 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.306160 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.310419 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cg7jw"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.314446 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nm4p\" (UniqueName: 
\"kubernetes.io/projected/0917e41b-2e05-4596-9ad4-05b382ee9f56-kube-api-access-4nm4p\") pod \"community-operators-mqw5m\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") " pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.314577 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.327068 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cg7jw"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.329726 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.386435 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-catalog-content\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.386498 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-utilities\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.386538 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shs7s\" (UniqueName: \"kubernetes.io/projected/d1d92eba-93c9-43cf-8c22-584d8a2e1579-kube-api-access-shs7s\") pod \"certified-operators-cg7jw\" (UID: 
\"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.386567 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.386943 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.886929907 +0000 UTC m=+228.270136064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.388440 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.444813 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.445395 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.487986 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.488070 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-config\") pod \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.488107 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.988080219 +0000 UTC m=+228.371286436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.488178 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af5df6df-7f6c-40a3-b1da-44af29cdee8b-serving-cert\") pod \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.488253 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-client-ca\") pod \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.488380 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vswr\" (UniqueName: \"kubernetes.io/projected/af5df6df-7f6c-40a3-b1da-44af29cdee8b-kube-api-access-2vswr\") pod \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\" (UID: \"af5df6df-7f6c-40a3-b1da-44af29cdee8b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.489699 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-catalog-content\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.489793 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-utilities\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.489868 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shs7s\" (UniqueName: \"kubernetes.io/projected/d1d92eba-93c9-43cf-8c22-584d8a2e1579-kube-api-access-shs7s\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.489919 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.490246 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 08:51:36.990236357 +0000 UTC m=+228.373442514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kzrfw" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.490426 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-config" (OuterVolumeSpecName: "config") pod "af5df6df-7f6c-40a3-b1da-44af29cdee8b" (UID: "af5df6df-7f6c-40a3-b1da-44af29cdee8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.490942 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-catalog-content\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.490970 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-utilities\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.491087 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-client-ca" (OuterVolumeSpecName: "client-ca") pod "af5df6df-7f6c-40a3-b1da-44af29cdee8b" (UID: "af5df6df-7f6c-40a3-b1da-44af29cdee8b"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.495480 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5df6df-7f6c-40a3-b1da-44af29cdee8b-kube-api-access-2vswr" (OuterVolumeSpecName: "kube-api-access-2vswr") pod "af5df6df-7f6c-40a3-b1da-44af29cdee8b" (UID: "af5df6df-7f6c-40a3-b1da-44af29cdee8b"). InnerVolumeSpecName "kube-api-access-2vswr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.498132 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5df6df-7f6c-40a3-b1da-44af29cdee8b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af5df6df-7f6c-40a3-b1da-44af29cdee8b" (UID: "af5df6df-7f6c-40a3-b1da-44af29cdee8b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.508403 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.523921 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mmcm9"] Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.524194 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b16df0c-b660-4f3c-9d26-cfff395d5c88" containerName="collect-profiles" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.524209 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b16df0c-b660-4f3c-9d26-cfff395d5c88" containerName="collect-profiles" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.524226 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" containerName="controller-manager" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.524234 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" containerName="controller-manager" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.524251 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" containerName="route-controller-manager" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.524257 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" containerName="route-controller-manager" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.524354 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" containerName="controller-manager" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.524385 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b16df0c-b660-4f3c-9d26-cfff395d5c88" containerName="collect-profiles" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.524398 4940 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" containerName="route-controller-manager" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.525236 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.525351 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shs7s\" (UniqueName: \"kubernetes.io/projected/d1d92eba-93c9-43cf-8c22-584d8a2e1579-kube-api-access-shs7s\") pod \"certified-operators-cg7jw\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.536677 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmcm9"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591038 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b16df0c-b660-4f3c-9d26-cfff395d5c88-config-volume\") pod \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591089 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-config\") pod \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591107 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-client-ca\") pod \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591126 4940 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b16df0c-b660-4f3c-9d26-cfff395d5c88-secret-volume\") pod \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591231 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591261 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m4cz\" (UniqueName: \"kubernetes.io/projected/e13ea819-2f94-423e-ab3f-c7b6d03ad686-kube-api-access-4m4cz\") pod \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591282 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b5rx\" (UniqueName: \"kubernetes.io/projected/1b16df0c-b660-4f3c-9d26-cfff395d5c88-kube-api-access-5b5rx\") pod \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\" (UID: \"1b16df0c-b660-4f3c-9d26-cfff395d5c88\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591324 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-proxy-ca-bundles\") pod \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591389 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e13ea819-2f94-423e-ab3f-c7b6d03ad686-serving-cert\") pod \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\" (UID: \"e13ea819-2f94-423e-ab3f-c7b6d03ad686\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591566 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-utilities\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591645 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-catalog-content\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591682 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxvmt\" (UniqueName: \"kubernetes.io/projected/b627df02-a2a7-4070-b2c7-8b9260637500-kube-api-access-pxvmt\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591743 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591758 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af5df6df-7f6c-40a3-b1da-44af29cdee8b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591770 4940 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af5df6df-7f6c-40a3-b1da-44af29cdee8b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.591782 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vswr\" (UniqueName: \"kubernetes.io/projected/af5df6df-7f6c-40a3-b1da-44af29cdee8b-kube-api-access-2vswr\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.592276 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-client-ca" (OuterVolumeSpecName: "client-ca") pod "e13ea819-2f94-423e-ab3f-c7b6d03ad686" (UID: "e13ea819-2f94-423e-ab3f-c7b6d03ad686"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.592401 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b16df0c-b660-4f3c-9d26-cfff395d5c88-config-volume" (OuterVolumeSpecName: "config-volume") pod "1b16df0c-b660-4f3c-9d26-cfff395d5c88" (UID: "1b16df0c-b660-4f3c-9d26-cfff395d5c88"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.592788 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e13ea819-2f94-423e-ab3f-c7b6d03ad686" (UID: "e13ea819-2f94-423e-ab3f-c7b6d03ad686"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: E0223 08:51:36.592878 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 08:51:37.092857876 +0000 UTC m=+228.476064073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.596694 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-config" (OuterVolumeSpecName: "config") pod "e13ea819-2f94-423e-ab3f-c7b6d03ad686" (UID: "e13ea819-2f94-423e-ab3f-c7b6d03ad686"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.598911 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13ea819-2f94-423e-ab3f-c7b6d03ad686-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e13ea819-2f94-423e-ab3f-c7b6d03ad686" (UID: "e13ea819-2f94-423e-ab3f-c7b6d03ad686"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.600278 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b16df0c-b660-4f3c-9d26-cfff395d5c88-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1b16df0c-b660-4f3c-9d26-cfff395d5c88" (UID: "1b16df0c-b660-4f3c-9d26-cfff395d5c88"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.609986 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b16df0c-b660-4f3c-9d26-cfff395d5c88-kube-api-access-5b5rx" (OuterVolumeSpecName: "kube-api-access-5b5rx") pod "1b16df0c-b660-4f3c-9d26-cfff395d5c88" (UID: "1b16df0c-b660-4f3c-9d26-cfff395d5c88"). InnerVolumeSpecName "kube-api-access-5b5rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.611712 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13ea819-2f94-423e-ab3f-c7b6d03ad686-kube-api-access-4m4cz" (OuterVolumeSpecName: "kube-api-access-4m4cz") pod "e13ea819-2f94-423e-ab3f-c7b6d03ad686" (UID: "e13ea819-2f94-423e-ab3f-c7b6d03ad686"). InnerVolumeSpecName "kube-api-access-4m4cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.656403 4940 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-23T08:51:35.847146868Z","Handler":null,"Name":""} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.661200 4940 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.661234 4940 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.670703 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wprw9"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.687236 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.692693 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-utilities\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.692786 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-catalog-content\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.692841 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxvmt\" (UniqueName: \"kubernetes.io/projected/b627df02-a2a7-4070-b2c7-8b9260637500-kube-api-access-pxvmt\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.692864 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.692956 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m4cz\" (UniqueName: \"kubernetes.io/projected/e13ea819-2f94-423e-ab3f-c7b6d03ad686-kube-api-access-4m4cz\") on node \"crc\" DevicePath 
\"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.692993 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b5rx\" (UniqueName: \"kubernetes.io/projected/1b16df0c-b660-4f3c-9d26-cfff395d5c88-kube-api-access-5b5rx\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693005 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693014 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e13ea819-2f94-423e-ab3f-c7b6d03ad686-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693022 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b16df0c-b660-4f3c-9d26-cfff395d5c88-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693032 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693040 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e13ea819-2f94-423e-ab3f-c7b6d03ad686-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693073 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1b16df0c-b660-4f3c-9d26-cfff395d5c88-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693233 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-utilities\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.693295 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-catalog-content\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.696306 4940 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.696352 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.714601 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxvmt\" (UniqueName: \"kubernetes.io/projected/b627df02-a2a7-4070-b2c7-8b9260637500-kube-api-access-pxvmt\") pod \"community-operators-mmcm9\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.742716 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kzrfw\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.748555 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.749382 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.795488 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.807380 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqw5m"] Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.811785 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 08:51:36 crc kubenswrapper[4940]: W0223 08:51:36.835247 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0917e41b_2e05_4596_9ad4_05b382ee9f56.slice/crio-54043031f90f7b77931128beca5b5a9aa21e1648c68e41a3fe41c22034d9511e WatchSource:0}: Error finding container 54043031f90f7b77931128beca5b5a9aa21e1648c68e41a3fe41c22034d9511e: Status 404 returned error can't find the container with id 54043031f90f7b77931128beca5b5a9aa21e1648c68e41a3fe41c22034d9511e Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.871164 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.880795 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:36 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:36 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:36 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.881542 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.919983 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c2be6bf-ea2f-47a2-851e-c67d893f0563","Type":"ContainerStarted","Data":"c3f049a6ab970fb553bbb8e679814ec3f21c8b101b9ecc42d5eef89b6cf22455"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 
08:51:36.924645 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqw5m" event={"ID":"0917e41b-2e05-4596-9ad4-05b382ee9f56","Type":"ContainerStarted","Data":"54043031f90f7b77931128beca5b5a9aa21e1648c68e41a3fe41c22034d9511e"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.934585 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" event={"ID":"1b16df0c-b660-4f3c-9d26-cfff395d5c88","Type":"ContainerDied","Data":"429301659db5ff149f44a22558d8598803d8457df67f2e20dc866f8e87d8acaf"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.934639 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429301659db5ff149f44a22558d8598803d8457df67f2e20dc866f8e87d8acaf" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.934637 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.936230 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.936911 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb" event={"ID":"af5df6df-7f6c-40a3-b1da-44af29cdee8b","Type":"ContainerDied","Data":"34f0b4758aab68ebfe65079b4ae647649d26117982b3035f6d70686248553646"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.936945 4940 scope.go:117] "RemoveContainer" containerID="ac0a19ea351b92f589453a34077d4bddefb33a6b68a444fdf2e10a434d067cc0" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.938894 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerStarted","Data":"cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.938915 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerStarted","Data":"a186e27b1095869988f46249fb2a428f422bd7a4816bc0d1cc1d70b853317975"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.940296 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.942040 4940 generic.go:334] "Generic (PLEG): container finished" podID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" containerID="7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876" exitCode=0 Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.942667 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.942661 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" event={"ID":"e13ea819-2f94-423e-ab3f-c7b6d03ad686","Type":"ContainerDied","Data":"7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.942730 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-224dd" event={"ID":"e13ea819-2f94-423e-ab3f-c7b6d03ad686","Type":"ContainerDied","Data":"da7d9dfab5da8ecc4117f16b91c332b55f678d20b6e3c4694bdf5179dd80c3cf"} Feb 23 08:51:36 crc kubenswrapper[4940]: I0223 08:51:36.974085 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cg7jw"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:36.998570 4940 scope.go:117] "RemoveContainer" containerID="7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876" Feb 23 08:51:37 crc kubenswrapper[4940]: W0223 08:51:36.999231 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1d92eba_93c9_43cf_8c22_584d8a2e1579.slice/crio-0f45b121fbf5bfc6155727e3725f638fb699fcf754116aa5803c6f9c93a81e67 WatchSource:0}: Error finding container 0f45b121fbf5bfc6155727e3725f638fb699fcf754116aa5803c6f9c93a81e67: Status 404 returned error can't find the container with id 0f45b121fbf5bfc6155727e3725f638fb699fcf754116aa5803c6f9c93a81e67 Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.025529 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.026417 4940 scope.go:117] "RemoveContainer" 
containerID="7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876" Feb 23 08:51:37 crc kubenswrapper[4940]: E0223 08:51:37.030373 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876\": container with ID starting with 7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876 not found: ID does not exist" containerID="7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.030418 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876"} err="failed to get container status \"7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876\": rpc error: code = NotFound desc = could not find container \"7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876\": container with ID starting with 7000fb665f85ae134abf3bb8cb528888596a50a0a688957be37a939d2e8ed876 not found: ID does not exist" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.035102 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2mrdb"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.048767 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kzrfw"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.057684 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-224dd"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.061632 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-224dd"] Feb 23 08:51:37 crc kubenswrapper[4940]: W0223 08:51:37.074315 4940 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05cfdf5e_5390_4f32_986d_02872c05f444.slice/crio-ef97393bd95d235ea6f1c38119a4749b446c23f6e22aafecb53ba9c5c8018584 WatchSource:0}: Error finding container ef97393bd95d235ea6f1c38119a4749b446c23f6e22aafecb53ba9c5c8018584: Status 404 returned error can't find the container with id ef97393bd95d235ea6f1c38119a4749b446c23f6e22aafecb53ba9c5c8018584 Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.224752 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mmcm9"] Feb 23 08:51:37 crc kubenswrapper[4940]: W0223 08:51:37.247218 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb627df02_a2a7_4070_b2c7_8b9260637500.slice/crio-c6eb116cbda86fb79800ee166fd0190fe411ec4082dbafcb3a5c02f8f84e1ace WatchSource:0}: Error finding container c6eb116cbda86fb79800ee166fd0190fe411ec4082dbafcb3a5c02f8f84e1ace: Status 404 returned error can't find the container with id c6eb116cbda86fb79800ee166fd0190fe411ec4082dbafcb3a5c02f8f84e1ace Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.353130 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.353982 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5df6df-7f6c-40a3-b1da-44af29cdee8b" path="/var/lib/kubelet/pods/af5df6df-7f6c-40a3-b1da-44af29cdee8b/volumes" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.354569 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13ea819-2f94-423e-ab3f-c7b6d03ad686" path="/var/lib/kubelet/pods/e13ea819-2f94-423e-ab3f-c7b6d03ad686/volumes" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.879566 4940 patch_prober.go:28] 
interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:37 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:37 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:37 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.879653 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.903330 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-56j6r"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.905072 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.909799 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.911835 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56j6r"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.948646 4940 generic.go:334] "Generic (PLEG): container finished" podID="b627df02-a2a7-4070-b2c7-8b9260637500" containerID="f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401" exitCode=0 Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.948749 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmcm9" event={"ID":"b627df02-a2a7-4070-b2c7-8b9260637500","Type":"ContainerDied","Data":"f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.949009 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmcm9" event={"ID":"b627df02-a2a7-4070-b2c7-8b9260637500","Type":"ContainerStarted","Data":"c6eb116cbda86fb79800ee166fd0190fe411ec4082dbafcb3a5c02f8f84e1ace"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.959199 4940 generic.go:334] "Generic (PLEG): container finished" podID="3c2be6bf-ea2f-47a2-851e-c67d893f0563" containerID="3cf17afce22c10778a308b634ad607c83ca2d0401457f554e0c9d6ee1606c256" exitCode=0 Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.959316 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c2be6bf-ea2f-47a2-851e-c67d893f0563","Type":"ContainerDied","Data":"3cf17afce22c10778a308b634ad607c83ca2d0401457f554e0c9d6ee1606c256"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.961809 4940 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" event={"ID":"05cfdf5e-5390-4f32-986d-02872c05f444","Type":"ContainerStarted","Data":"1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.961877 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" event={"ID":"05cfdf5e-5390-4f32-986d-02872c05f444","Type":"ContainerStarted","Data":"ef97393bd95d235ea6f1c38119a4749b446c23f6e22aafecb53ba9c5c8018584"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.962186 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.978405 4940 generic.go:334] "Generic (PLEG): container finished" podID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerID="5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810" exitCode=0 Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.978507 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg7jw" event={"ID":"d1d92eba-93c9-43cf-8c22-584d8a2e1579","Type":"ContainerDied","Data":"5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.978552 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg7jw" event={"ID":"d1d92eba-93c9-43cf-8c22-584d8a2e1579","Type":"ContainerStarted","Data":"0f45b121fbf5bfc6155727e3725f638fb699fcf754116aa5803c6f9c93a81e67"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.995282 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"] Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.995901 4940 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.997518 4940 generic.go:334] "Generic (PLEG): container finished" podID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerID="e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22" exitCode=0 Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.997564 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqw5m" event={"ID":"0917e41b-2e05-4596-9ad4-05b382ee9f56","Type":"ContainerDied","Data":"e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22"} Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.999056 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.999183 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.999291 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.999359 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 08:51:37 crc kubenswrapper[4940]: I0223 08:51:37.999547 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.000120 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.012804 4940 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-98f8ddb54-5qrql"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.012872 4940 generic.go:334] "Generic (PLEG): container finished" podID="fc357ef5-0994-4918-859b-d623e534da2a" containerID="cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051" exitCode=0 Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.013730 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerDied","Data":"cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051"} Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.013836 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.014190 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.016480 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.016758 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.017159 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.017367 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.017513 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 
23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.018426 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.024016 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.027626 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-utilities\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.027739 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn57d\" (UniqueName: \"kubernetes.io/projected/c723f067-bc1f-4d88-aa37-00f8896a9d38-kube-api-access-pn57d\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.027826 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-catalog-content\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.032525 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-98f8ddb54-5qrql"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.036883 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" 
podStartSLOduration=161.036857165 podStartE2EDuration="2m41.036857165s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:38.012453711 +0000 UTC m=+229.395659868" watchObservedRunningTime="2026-02-23 08:51:38.036857165 +0000 UTC m=+229.420063322" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130162 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-client-ca\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130299 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-config\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130347 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-proxy-ca-bundles\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130387 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-utilities\") pod \"redhat-marketplace-56j6r\" (UID: 
\"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130411 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-config\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130463 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-client-ca\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130487 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5hl\" (UniqueName: \"kubernetes.io/projected/d66696cc-13f0-4bec-a40a-1874441498ee-kube-api-access-cl5hl\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130520 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn57d\" (UniqueName: \"kubernetes.io/projected/c723f067-bc1f-4d88-aa37-00f8896a9d38-kube-api-access-pn57d\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130542 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66696cc-13f0-4bec-a40a-1874441498ee-serving-cert\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130586 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9rf7\" (UniqueName: \"kubernetes.io/projected/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-kube-api-access-z9rf7\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130636 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-serving-cert\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.130689 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-catalog-content\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.131380 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-catalog-content\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " 
pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.131453 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-utilities\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.155876 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn57d\" (UniqueName: \"kubernetes.io/projected/c723f067-bc1f-4d88-aa37-00f8896a9d38-kube-api-access-pn57d\") pod \"redhat-marketplace-56j6r\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") " pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231540 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66696cc-13f0-4bec-a40a-1874441498ee-serving-cert\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231652 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rf7\" (UniqueName: \"kubernetes.io/projected/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-kube-api-access-z9rf7\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231696 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-serving-cert\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: 
\"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231733 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-client-ca\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231775 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-config\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231810 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-proxy-ca-bundles\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231842 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-config\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231891 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-client-ca\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.231919 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5hl\" (UniqueName: \"kubernetes.io/projected/d66696cc-13f0-4bec-a40a-1874441498ee-kube-api-access-cl5hl\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.251455 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66696cc-13f0-4bec-a40a-1874441498ee-serving-cert\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.252396 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-proxy-ca-bundles\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.256475 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-client-ca\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc 
kubenswrapper[4940]: I0223 08:51:38.259726 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-config\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.259971 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5hl\" (UniqueName: \"kubernetes.io/projected/d66696cc-13f0-4bec-a40a-1874441498ee-kube-api-access-cl5hl\") pod \"route-controller-manager-6d79fb88b9-6n2zj\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.263417 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-config\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.266471 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-client-ca\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.273289 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9rf7\" (UniqueName: \"kubernetes.io/projected/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-kube-api-access-z9rf7\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " 
pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.273692 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-serving-cert\") pod \"controller-manager-98f8ddb54-5qrql\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.275452 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.299305 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9zgq2"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.300717 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.309395 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zgq2"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.321227 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.334799 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.434754 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k24j\" (UniqueName: \"kubernetes.io/projected/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-kube-api-access-5k24j\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.434837 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-utilities\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.434864 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-catalog-content\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.536685 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-catalog-content\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.536875 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k24j\" (UniqueName: 
\"kubernetes.io/projected/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-kube-api-access-5k24j\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.536942 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-utilities\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.537221 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-catalog-content\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.537478 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-utilities\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.559539 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k24j\" (UniqueName: \"kubernetes.io/projected/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-kube-api-access-5k24j\") pod \"redhat-marketplace-9zgq2\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") " pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.616624 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"] Feb 23 08:51:38 crc 
kubenswrapper[4940]: I0223 08:51:38.640824 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.656027 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-98f8ddb54-5qrql"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.656324 4940 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qcpm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.656399 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qcpm" podUID="916e1f6f-2bfc-41e7-86c2-6c379e3638c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.656440 4940 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qcpm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.656494 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6qcpm" podUID="916e1f6f-2bfc-41e7-86c2-6c379e3638c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.763905 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56j6r"] Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.828060 4940 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.837825 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-slpn2" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.878277 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.898297 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:38 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:38 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:38 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:38 crc kubenswrapper[4940]: I0223 08:51:38.898357 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.017865 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.034294 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zgq2"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.034957 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-znvc9" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.038242 4940 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.040310 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.041125 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.042784 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerStarted","Data":"b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030"} Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.042827 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerStarted","Data":"af6420bcd7623b9239412c41965d458c6e7ada3f5b50007544bbcea361894f48"} Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.049590 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.049911 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.080917 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" event={"ID":"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d","Type":"ContainerStarted","Data":"fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109"} Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.080998 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" 
event={"ID":"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d","Type":"ContainerStarted","Data":"27335f41bcf6e5780150260eb461d358616b1355d17c7aeb43997451871183f7"} Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.081713 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.086883 4940 patch_prober.go:28] interesting pod/controller-manager-98f8ddb54-5qrql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.086984 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" podUID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.129818 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" event={"ID":"d66696cc-13f0-4bec-a40a-1874441498ee","Type":"ContainerStarted","Data":"18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4"} Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.129858 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" event={"ID":"d66696cc-13f0-4bec-a40a-1874441498ee","Type":"ContainerStarted","Data":"db37f91299927042231ab874ff96de31f167ed839c27c1761263d0d0744edad9"} Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.149498 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.149789 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.216980 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.224898 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" podStartSLOduration=4.224873077 podStartE2EDuration="4.224873077s" podCreationTimestamp="2026-02-23 08:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:39.219090763 +0000 UTC m=+230.602296920" watchObservedRunningTime="2026-02-23 08:51:39.224873077 +0000 UTC m=+230.608079254" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.253238 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.253350 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.254138 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.289184 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" podStartSLOduration=3.289143818 podStartE2EDuration="3.289143818s" podCreationTimestamp="2026-02-23 08:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:39.275282598 +0000 UTC m=+230.658488755" watchObservedRunningTime="2026-02-23 08:51:39.289143818 +0000 UTC m=+230.672349975" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.310080 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.310201 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7w9jb"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.311644 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.328703 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.328738 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.330544 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.336516 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7w9jb"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.374732 4940 patch_prober.go:28] interesting pod/console-f9d7485db-2zgdn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.374819 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2zgdn" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.403025 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.418259 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9xdrp" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.472257 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj69h\" (UniqueName: \"kubernetes.io/projected/000a58ff-706e-452e-8fa1-493f98d2e314-kube-api-access-xj69h\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.472386 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-catalog-content\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.472499 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-utilities\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.576178 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj69h\" (UniqueName: \"kubernetes.io/projected/000a58ff-706e-452e-8fa1-493f98d2e314-kube-api-access-xj69h\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.576265 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-catalog-content\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.576356 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-utilities\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.576783 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-utilities\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.577995 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-catalog-content\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.612685 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.625992 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj69h\" (UniqueName: \"kubernetes.io/projected/000a58ff-706e-452e-8fa1-493f98d2e314-kube-api-access-xj69h\") pod \"redhat-operators-7w9jb\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") " pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.702489 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6shbw"] Feb 23 08:51:39 crc kubenswrapper[4940]: E0223 08:51:39.702929 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2be6bf-ea2f-47a2-851e-c67d893f0563" containerName="pruner" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.707627 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2be6bf-ea2f-47a2-851e-c67d893f0563" containerName="pruner" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.707886 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2be6bf-ea2f-47a2-851e-c67d893f0563" containerName="pruner" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.708729 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.716901 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.725924 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6shbw"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.782151 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kube-api-access\") pod \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.782235 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kubelet-dir\") pod \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\" (UID: \"3c2be6bf-ea2f-47a2-851e-c67d893f0563\") " Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.782505 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c2be6bf-ea2f-47a2-851e-c67d893f0563" (UID: "3c2be6bf-ea2f-47a2-851e-c67d893f0563"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.794880 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c2be6bf-ea2f-47a2-851e-c67d893f0563" (UID: "3c2be6bf-ea2f-47a2-851e-c67d893f0563"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.884479 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-utilities\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.884526 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-catalog-content\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.884566 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5429\" (UniqueName: \"kubernetes.io/projected/f0f689fc-e907-420c-869b-0a3d496358a4-kube-api-access-p5429\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.884670 4940 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.884681 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c2be6bf-ea2f-47a2-851e-c67d893f0563-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.885199 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:39 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:39 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:39 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.885230 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.976371 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.985545 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-utilities\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.985592 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-catalog-content\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.985660 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5429\" (UniqueName: \"kubernetes.io/projected/f0f689fc-e907-420c-869b-0a3d496358a4-kube-api-access-p5429\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 
08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.986417 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-utilities\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:39 crc kubenswrapper[4940]: I0223 08:51:39.987009 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-catalog-content\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.011122 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5429\" (UniqueName: \"kubernetes.io/projected/f0f689fc-e907-420c-869b-0a3d496358a4-kube-api-access-p5429\") pod \"redhat-operators-6shbw\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") " pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.043700 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.159383 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0","Type":"ContainerStarted","Data":"a6324478ab8f8634fcf11039a0a07b04a34fd6e48f567a8ea3feb707b63101a1"} Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.167261 4940 generic.go:334] "Generic (PLEG): container finished" podID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerID="b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030" exitCode=0 Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.167431 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerDied","Data":"b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030"} Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.173463 4940 generic.go:334] "Generic (PLEG): container finished" podID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerID="feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9" exitCode=0 Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.173936 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zgq2" event={"ID":"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8","Type":"ContainerDied","Data":"feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9"} Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.174004 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zgq2" event={"ID":"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8","Type":"ContainerStarted","Data":"5a94ef051733ac25de3250fa8525f81b28c9eb531f5ce82594d35e16b243f6b3"} Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.182300 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c2be6bf-ea2f-47a2-851e-c67d893f0563","Type":"ContainerDied","Data":"c3f049a6ab970fb553bbb8e679814ec3f21c8b101b9ecc42d5eef89b6cf22455"} Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.182336 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f049a6ab970fb553bbb8e679814ec3f21c8b101b9ecc42d5eef89b6cf22455" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.182671 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.183197 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.198855 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.231982 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.424273 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7w9jb"] Feb 23 08:51:40 crc kubenswrapper[4940]: W0223 08:51:40.485851 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod000a58ff_706e_452e_8fa1_493f98d2e314.slice/crio-e974e27648c51fe2f52df96c81b8212a21cadc8150f286f2259ea35df0f4e4ce WatchSource:0}: Error finding container e974e27648c51fe2f52df96c81b8212a21cadc8150f286f2259ea35df0f4e4ce: Status 404 returned error can't find the container with id e974e27648c51fe2f52df96c81b8212a21cadc8150f286f2259ea35df0f4e4ce Feb 23 
08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.539641 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6shbw"] Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.881464 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:40 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:40 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:40 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.882445 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:40 crc kubenswrapper[4940]: I0223 08:51:40.945508 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sppfz" Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.218013 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0","Type":"ContainerStarted","Data":"c8f67e7acd38dc5a484ea0389ccac5dc2114881f80de1ffc9d7f9435b2c52c51"} Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.223851 4940 generic.go:334] "Generic (PLEG): container finished" podID="000a58ff-706e-452e-8fa1-493f98d2e314" containerID="4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918" exitCode=0 Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.223933 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7w9jb" 
event={"ID":"000a58ff-706e-452e-8fa1-493f98d2e314","Type":"ContainerDied","Data":"4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918"} Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.224001 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7w9jb" event={"ID":"000a58ff-706e-452e-8fa1-493f98d2e314","Type":"ContainerStarted","Data":"e974e27648c51fe2f52df96c81b8212a21cadc8150f286f2259ea35df0f4e4ce"} Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.233273 4940 generic.go:334] "Generic (PLEG): container finished" podID="f0f689fc-e907-420c-869b-0a3d496358a4" containerID="8f82bc158ea28d01e5fc05f6db76667e5d32f37cde8673cb60c65437f669c37e" exitCode=0 Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.233411 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6shbw" event={"ID":"f0f689fc-e907-420c-869b-0a3d496358a4","Type":"ContainerDied","Data":"8f82bc158ea28d01e5fc05f6db76667e5d32f37cde8673cb60c65437f669c37e"} Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.233468 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6shbw" event={"ID":"f0f689fc-e907-420c-869b-0a3d496358a4","Type":"ContainerStarted","Data":"46f3ac616ee244492e447917dc6a1964a7c228462e67d6e6f73c3b7c0cf3d24a"} Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.244196 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.244175303 podStartE2EDuration="2.244175303s" podCreationTimestamp="2026-02-23 08:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:51:41.240741214 +0000 UTC m=+232.623947371" watchObservedRunningTime="2026-02-23 08:51:41.244175303 +0000 UTC m=+232.627381460" Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 
08:51:41.878565 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:41 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:41 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:41 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:41 crc kubenswrapper[4940]: I0223 08:51:41.878639 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:42 crc kubenswrapper[4940]: I0223 08:51:42.244623 4940 generic.go:334] "Generic (PLEG): container finished" podID="fb9a2585-8f25-4d1b-83b1-ab3d523e73e0" containerID="c8f67e7acd38dc5a484ea0389ccac5dc2114881f80de1ffc9d7f9435b2c52c51" exitCode=0 Feb 23 08:51:42 crc kubenswrapper[4940]: I0223 08:51:42.244729 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0","Type":"ContainerDied","Data":"c8f67e7acd38dc5a484ea0389ccac5dc2114881f80de1ffc9d7f9435b2c52c51"} Feb 23 08:51:42 crc kubenswrapper[4940]: I0223 08:51:42.886449 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:42 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:42 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:42 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:42 crc kubenswrapper[4940]: I0223 08:51:42.886520 4940 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:43 crc kubenswrapper[4940]: I0223 08:51:43.877733 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:43 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:43 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:43 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:43 crc kubenswrapper[4940]: I0223 08:51:43.877796 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:44 crc kubenswrapper[4940]: I0223 08:51:44.877196 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:44 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:44 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:44 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:44 crc kubenswrapper[4940]: I0223 08:51:44.877558 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:45 crc kubenswrapper[4940]: I0223 08:51:45.877345 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:45 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:45 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:45 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:45 crc kubenswrapper[4940]: I0223 08:51:45.877577 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:46 crc kubenswrapper[4940]: I0223 08:51:46.876207 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:46 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:46 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:46 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:46 crc kubenswrapper[4940]: I0223 08:51:46.876566 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:47 crc kubenswrapper[4940]: I0223 08:51:47.877426 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:47 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:47 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:47 crc 
kubenswrapper[4940]: healthz check failed Feb 23 08:51:47 crc kubenswrapper[4940]: I0223 08:51:47.877497 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:48 crc kubenswrapper[4940]: I0223 08:51:48.692539 4940 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qcpm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 23 08:51:48 crc kubenswrapper[4940]: I0223 08:51:48.692602 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6qcpm" podUID="916e1f6f-2bfc-41e7-86c2-6c379e3638c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 23 08:51:48 crc kubenswrapper[4940]: I0223 08:51:48.693333 4940 patch_prober.go:28] interesting pod/downloads-7954f5f757-6qcpm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 23 08:51:48 crc kubenswrapper[4940]: I0223 08:51:48.693400 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6qcpm" podUID="916e1f6f-2bfc-41e7-86c2-6c379e3638c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 23 08:51:48 crc kubenswrapper[4940]: I0223 08:51:48.877594 4940 patch_prober.go:28] interesting pod/router-default-5444994796-tqnlh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 08:51:48 crc kubenswrapper[4940]: [-]has-synced failed: reason withheld Feb 23 08:51:48 crc kubenswrapper[4940]: [+]process-running ok Feb 23 08:51:48 crc kubenswrapper[4940]: healthz check failed Feb 23 08:51:48 crc kubenswrapper[4940]: I0223 08:51:48.877670 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tqnlh" podUID="feea3a62-1f72-4b46-9655-521e8ff5c323" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.323403 4940 patch_prober.go:28] interesting pod/console-f9d7485db-2zgdn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.323740 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2zgdn" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.630280 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.722347 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kubelet-dir\") pod \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.722635 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb9a2585-8f25-4d1b-83b1-ab3d523e73e0" (UID: "fb9a2585-8f25-4d1b-83b1-ab3d523e73e0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.722732 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kube-api-access\") pod \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\" (UID: \"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0\") " Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.723030 4940 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.728648 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb9a2585-8f25-4d1b-83b1-ab3d523e73e0" (UID: "fb9a2585-8f25-4d1b-83b1-ab3d523e73e0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.824653 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb9a2585-8f25-4d1b-83b1-ab3d523e73e0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.877766 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:49 crc kubenswrapper[4940]: I0223 08:51:49.879848 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tqnlh" Feb 23 08:51:50 crc kubenswrapper[4940]: I0223 08:51:50.333827 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 08:51:50 crc kubenswrapper[4940]: I0223 08:51:50.335724 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb9a2585-8f25-4d1b-83b1-ab3d523e73e0","Type":"ContainerDied","Data":"a6324478ab8f8634fcf11039a0a07b04a34fd6e48f567a8ea3feb707b63101a1"} Feb 23 08:51:50 crc kubenswrapper[4940]: I0223 08:51:50.335777 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6324478ab8f8634fcf11039a0a07b04a34fd6e48f567a8ea3feb707b63101a1" Feb 23 08:51:52 crc kubenswrapper[4940]: I0223 08:51:52.155563 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:52 crc kubenswrapper[4940]: I0223 08:51:52.157370 4940 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Feb 23 08:51:52 crc kubenswrapper[4940]: I0223 08:51:52.174765 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dd2da9-cea0-44f5-8c93-91b79c7f66ea-metrics-certs\") pod \"network-metrics-daemon-jwb9b\" (UID: \"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea\") " pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:52 crc kubenswrapper[4940]: I0223 08:51:52.308313 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 08:51:52 crc kubenswrapper[4940]: I0223 08:51:52.316937 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jwb9b" Feb 23 08:51:56 crc kubenswrapper[4940]: I0223 08:51:56.756968 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:51:58 crc kubenswrapper[4940]: I0223 08:51:58.661002 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6qcpm" Feb 23 08:51:59 crc kubenswrapper[4940]: I0223 08:51:59.329141 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:51:59 crc kubenswrapper[4940]: I0223 08:51:59.333969 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 08:52:01 crc kubenswrapper[4940]: I0223 08:52:01.430013 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:52:01 crc kubenswrapper[4940]: I0223 08:52:01.430995 4940 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:52:03 crc kubenswrapper[4940]: I0223 08:52:03.400863 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jwb9b"] Feb 23 08:52:03 crc kubenswrapper[4940]: W0223 08:52:03.417146 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8dd2da9_cea0_44f5_8c93_91b79c7f66ea.slice/crio-9d55195a847ae1e99a31354ca98a2e8295ed2112b52e3f4ef365d3a51ffe8dce WatchSource:0}: Error finding container 9d55195a847ae1e99a31354ca98a2e8295ed2112b52e3f4ef365d3a51ffe8dce: Status 404 returned error can't find the container with id 9d55195a847ae1e99a31354ca98a2e8295ed2112b52e3f4ef365d3a51ffe8dce Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.415565 4940 generic.go:334] "Generic (PLEG): container finished" podID="fc357ef5-0994-4918-859b-d623e534da2a" containerID="9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.415646 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerDied","Data":"9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.419310 4940 generic.go:334] "Generic (PLEG): container finished" podID="b627df02-a2a7-4070-b2c7-8b9260637500" containerID="f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.419390 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-mmcm9" event={"ID":"b627df02-a2a7-4070-b2c7-8b9260637500","Type":"ContainerDied","Data":"f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.424603 4940 generic.go:334] "Generic (PLEG): container finished" podID="f0f689fc-e907-420c-869b-0a3d496358a4" containerID="8c44eba136de2f29e0e6114913db95655e6557aab2438ea008b26ab23cd5c416" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.424716 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6shbw" event={"ID":"f0f689fc-e907-420c-869b-0a3d496358a4","Type":"ContainerDied","Data":"8c44eba136de2f29e0e6114913db95655e6557aab2438ea008b26ab23cd5c416"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.426709 4940 generic.go:334] "Generic (PLEG): container finished" podID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerID="f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.426784 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg7jw" event={"ID":"d1d92eba-93c9-43cf-8c22-584d8a2e1579","Type":"ContainerDied","Data":"f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.429704 4940 generic.go:334] "Generic (PLEG): container finished" podID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerID="09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.429930 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zgq2" event={"ID":"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8","Type":"ContainerDied","Data":"09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.432369 4940 
generic.go:334] "Generic (PLEG): container finished" podID="000a58ff-706e-452e-8fa1-493f98d2e314" containerID="6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.432425 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7w9jb" event={"ID":"000a58ff-706e-452e-8fa1-493f98d2e314","Type":"ContainerDied","Data":"6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.436417 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" event={"ID":"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea","Type":"ContainerStarted","Data":"3d9725b1ea91d3f0e18ecf74cc767520c48b557d3d18f7ad211303d06b0ca2d4"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.436460 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" event={"ID":"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea","Type":"ContainerStarted","Data":"18de57023189d562802a5c71d256f803535bd73f2bd3c6902c394b58fffc34ed"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.436476 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jwb9b" event={"ID":"d8dd2da9-cea0-44f5-8c93-91b79c7f66ea","Type":"ContainerStarted","Data":"9d55195a847ae1e99a31354ca98a2e8295ed2112b52e3f4ef365d3a51ffe8dce"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.441061 4940 generic.go:334] "Generic (PLEG): container finished" podID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerID="2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.441147 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqw5m" 
event={"ID":"0917e41b-2e05-4596-9ad4-05b382ee9f56","Type":"ContainerDied","Data":"2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823"} Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.443850 4940 generic.go:334] "Generic (PLEG): container finished" podID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerID="6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9" exitCode=0 Feb 23 08:52:04 crc kubenswrapper[4940]: I0223 08:52:04.443887 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerDied","Data":"6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9"} Feb 23 08:52:05 crc kubenswrapper[4940]: I0223 08:52:05.488707 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jwb9b" podStartSLOduration=188.488674577 podStartE2EDuration="3m8.488674577s" podCreationTimestamp="2026-02-23 08:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:05.48149831 +0000 UTC m=+256.864704477" watchObservedRunningTime="2026-02-23 08:52:05.488674577 +0000 UTC m=+256.871880794" Feb 23 08:52:08 crc kubenswrapper[4940]: I0223 08:52:08.826973 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-glhjd" Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.482407 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6shbw" event={"ID":"f0f689fc-e907-420c-869b-0a3d496358a4","Type":"ContainerStarted","Data":"6b6ccfddc245636386dcb0729fc553c9a914b0db47eb094e0e3aa613768c1ad4"} Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.484255 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-cg7jw" event={"ID":"d1d92eba-93c9-43cf-8c22-584d8a2e1579","Type":"ContainerStarted","Data":"322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b"} Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.486719 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqw5m" event={"ID":"0917e41b-2e05-4596-9ad4-05b382ee9f56","Type":"ContainerStarted","Data":"58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a"} Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.499195 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerStarted","Data":"1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107"} Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.503460 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7w9jb" event={"ID":"000a58ff-706e-452e-8fa1-493f98d2e314","Type":"ContainerStarted","Data":"019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954"} Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.509308 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6shbw" podStartSLOduration=3.631383309 podStartE2EDuration="30.509288648s" podCreationTimestamp="2026-02-23 08:51:39 +0000 UTC" firstStartedPulling="2026-02-23 08:51:41.254083798 +0000 UTC m=+232.637289955" lastFinishedPulling="2026-02-23 08:52:08.131989127 +0000 UTC m=+259.515195294" observedRunningTime="2026-02-23 08:52:09.508548265 +0000 UTC m=+260.891754442" watchObservedRunningTime="2026-02-23 08:52:09.509288648 +0000 UTC m=+260.892494805" Feb 23 08:52:09 crc kubenswrapper[4940]: I0223 08:52:09.528264 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-56j6r" 
podStartSLOduration=4.19462105 podStartE2EDuration="32.5282457s" podCreationTimestamp="2026-02-23 08:51:37 +0000 UTC" firstStartedPulling="2026-02-23 08:51:40.169965795 +0000 UTC m=+231.553171952" lastFinishedPulling="2026-02-23 08:52:08.503590445 +0000 UTC m=+259.886796602" observedRunningTime="2026-02-23 08:52:09.527059943 +0000 UTC m=+260.910266100" watchObservedRunningTime="2026-02-23 08:52:09.5282457 +0000 UTC m=+260.911451857" Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.044851 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.044906 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.517218 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zgq2" event={"ID":"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8","Type":"ContainerStarted","Data":"0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d"} Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.523436 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmcm9" event={"ID":"b627df02-a2a7-4070-b2c7-8b9260637500","Type":"ContainerStarted","Data":"d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe"} Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.562122 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9zgq2" podStartSLOduration=2.422139621 podStartE2EDuration="32.562098407s" podCreationTimestamp="2026-02-23 08:51:38 +0000 UTC" firstStartedPulling="2026-02-23 08:51:40.208038303 +0000 UTC m=+231.591244450" lastFinishedPulling="2026-02-23 08:52:10.347997079 +0000 UTC m=+261.731203236" observedRunningTime="2026-02-23 08:52:10.544329462 +0000 UTC 
m=+261.927535639" watchObservedRunningTime="2026-02-23 08:52:10.562098407 +0000 UTC m=+261.945304564" Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.563214 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7w9jb" podStartSLOduration=4.162125542 podStartE2EDuration="31.563206073s" podCreationTimestamp="2026-02-23 08:51:39 +0000 UTC" firstStartedPulling="2026-02-23 08:51:41.229055533 +0000 UTC m=+232.612261690" lastFinishedPulling="2026-02-23 08:52:08.630136064 +0000 UTC m=+260.013342221" observedRunningTime="2026-02-23 08:52:10.560126435 +0000 UTC m=+261.943332612" watchObservedRunningTime="2026-02-23 08:52:10.563206073 +0000 UTC m=+261.946412230" Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.577046 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cg7jw" podStartSLOduration=3.831484807 podStartE2EDuration="34.577027921s" podCreationTimestamp="2026-02-23 08:51:36 +0000 UTC" firstStartedPulling="2026-02-23 08:51:37.988856651 +0000 UTC m=+229.372062808" lastFinishedPulling="2026-02-23 08:52:08.734399735 +0000 UTC m=+260.117605922" observedRunningTime="2026-02-23 08:52:10.576331059 +0000 UTC m=+261.959537236" watchObservedRunningTime="2026-02-23 08:52:10.577027921 +0000 UTC m=+261.960234088" Feb 23 08:52:10 crc kubenswrapper[4940]: I0223 08:52:10.591442 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mmcm9" podStartSLOduration=2.269129541 podStartE2EDuration="34.591427259s" podCreationTimestamp="2026-02-23 08:51:36 +0000 UTC" firstStartedPulling="2026-02-23 08:51:37.951398942 +0000 UTC m=+229.334605099" lastFinishedPulling="2026-02-23 08:52:10.27369665 +0000 UTC m=+261.656902817" observedRunningTime="2026-02-23 08:52:10.590686855 +0000 UTC m=+261.973893022" watchObservedRunningTime="2026-02-23 08:52:10.591427259 +0000 UTC m=+261.974633416" Feb 
23 08:52:11 crc kubenswrapper[4940]: I0223 08:52:11.336881 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6shbw" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="registry-server" probeResult="failure" output=< Feb 23 08:52:11 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 08:52:11 crc kubenswrapper[4940]: > Feb 23 08:52:11 crc kubenswrapper[4940]: I0223 08:52:11.529841 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerStarted","Data":"22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee"} Feb 23 08:52:11 crc kubenswrapper[4940]: I0223 08:52:11.569720 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqw5m" podStartSLOduration=4.882814139 podStartE2EDuration="35.56970042s" podCreationTimestamp="2026-02-23 08:51:36 +0000 UTC" firstStartedPulling="2026-02-23 08:51:37.999502959 +0000 UTC m=+229.382709116" lastFinishedPulling="2026-02-23 08:52:08.68638924 +0000 UTC m=+260.069595397" observedRunningTime="2026-02-23 08:52:10.621673069 +0000 UTC m=+262.004879236" watchObservedRunningTime="2026-02-23 08:52:11.56970042 +0000 UTC m=+262.952906577" Feb 23 08:52:11 crc kubenswrapper[4940]: I0223 08:52:11.571803 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wprw9" podStartSLOduration=3.068529121 podStartE2EDuration="36.571795636s" podCreationTimestamp="2026-02-23 08:51:35 +0000 UTC" firstStartedPulling="2026-02-23 08:51:36.94004256 +0000 UTC m=+228.323248717" lastFinishedPulling="2026-02-23 08:52:10.443309075 +0000 UTC m=+261.826515232" observedRunningTime="2026-02-23 08:52:11.567040485 +0000 UTC m=+262.950246652" watchObservedRunningTime="2026-02-23 08:52:11.571795636 +0000 UTC m=+262.955001783" Feb 
23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.395867 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 08:52:14 crc kubenswrapper[4940]: E0223 08:52:14.396347 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9a2585-8f25-4d1b-83b1-ab3d523e73e0" containerName="pruner" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.396360 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9a2585-8f25-4d1b-83b1-ab3d523e73e0" containerName="pruner" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.396480 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9a2585-8f25-4d1b-83b1-ab3d523e73e0" containerName="pruner" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.396868 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.399306 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.402470 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.406205 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.573659 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092286ba-1bc6-4eaa-b993-928701e1366f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.573733 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/092286ba-1bc6-4eaa-b993-928701e1366f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.675157 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092286ba-1bc6-4eaa-b993-928701e1366f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.675219 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/092286ba-1bc6-4eaa-b993-928701e1366f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.675277 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092286ba-1bc6-4eaa-b993-928701e1366f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.695168 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/092286ba-1bc6-4eaa-b993-928701e1366f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:14 crc kubenswrapper[4940]: I0223 08:52:14.785254 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:15 crc kubenswrapper[4940]: I0223 08:52:15.277188 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 08:52:15 crc kubenswrapper[4940]: I0223 08:52:15.554324 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"092286ba-1bc6-4eaa-b993-928701e1366f","Type":"ContainerStarted","Data":"2c77714bc4837b7e2bd98b90e91cbec49d4c8a377a36973f8c6e68f7d973e5d7"} Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.273835 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.274153 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.364714 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.445719 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.445797 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.485539 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.561534 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"092286ba-1bc6-4eaa-b993-928701e1366f","Type":"ContainerStarted","Data":"d52f3e6decbeb3dae6fa2073543c9a11470d077bfb6061b5550053559bd71713"} Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.584098 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.58408217 podStartE2EDuration="2.58408217s" podCreationTimestamp="2026-02-23 08:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:16.583750079 +0000 UTC m=+267.966956246" watchObservedRunningTime="2026-02-23 08:52:16.58408217 +0000 UTC m=+267.967288327" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.607429 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wprw9" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.611337 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqw5m" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.688130 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.688871 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.733150 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.871555 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.871607 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:52:16 crc kubenswrapper[4940]: I0223 08:52:16.923760 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:52:17 crc kubenswrapper[4940]: I0223 08:52:17.567566 4940 generic.go:334] "Generic (PLEG): container finished" podID="092286ba-1bc6-4eaa-b993-928701e1366f" containerID="d52f3e6decbeb3dae6fa2073543c9a11470d077bfb6061b5550053559bd71713" exitCode=0 Feb 23 08:52:17 crc kubenswrapper[4940]: I0223 08:52:17.567761 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"092286ba-1bc6-4eaa-b993-928701e1366f","Type":"ContainerDied","Data":"d52f3e6decbeb3dae6fa2073543c9a11470d077bfb6061b5550053559bd71713"} Feb 23 08:52:17 crc kubenswrapper[4940]: I0223 08:52:17.613295 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:52:17 crc kubenswrapper[4940]: I0223 08:52:17.620061 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.275821 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.275870 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.338719 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.605102 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-56j6r" Feb 23 08:52:18 crc 
kubenswrapper[4940]: I0223 08:52:18.641648 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.641692 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.651657 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cg7jw"] Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.687871 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.781181 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.853985 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmcm9"] Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.940387 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/092286ba-1bc6-4eaa-b993-928701e1366f-kube-api-access\") pod \"092286ba-1bc6-4eaa-b993-928701e1366f\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.940543 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092286ba-1bc6-4eaa-b993-928701e1366f-kubelet-dir\") pod \"092286ba-1bc6-4eaa-b993-928701e1366f\" (UID: \"092286ba-1bc6-4eaa-b993-928701e1366f\") " Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.941025 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/092286ba-1bc6-4eaa-b993-928701e1366f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "092286ba-1bc6-4eaa-b993-928701e1366f" (UID: "092286ba-1bc6-4eaa-b993-928701e1366f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:52:18 crc kubenswrapper[4940]: I0223 08:52:18.956985 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092286ba-1bc6-4eaa-b993-928701e1366f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "092286ba-1bc6-4eaa-b993-928701e1366f" (UID: "092286ba-1bc6-4eaa-b993-928701e1366f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.042555 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/092286ba-1bc6-4eaa-b993-928701e1366f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.042934 4940 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092286ba-1bc6-4eaa-b993-928701e1366f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.577954 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"092286ba-1bc6-4eaa-b993-928701e1366f","Type":"ContainerDied","Data":"2c77714bc4837b7e2bd98b90e91cbec49d4c8a377a36973f8c6e68f7d973e5d7"} Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.578355 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c77714bc4837b7e2bd98b90e91cbec49d4c8a377a36973f8c6e68f7d973e5d7" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.578187 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.578780 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mmcm9" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="registry-server" containerID="cri-o://d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe" gracePeriod=2 Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.579522 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cg7jw" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="registry-server" containerID="cri-o://322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b" gracePeriod=2 Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.630945 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9zgq2" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.717827 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.717882 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.761158 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7w9jb" Feb 23 08:52:19 crc kubenswrapper[4940]: I0223 08:52:19.932589 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.056785 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxvmt\" (UniqueName: \"kubernetes.io/projected/b627df02-a2a7-4070-b2c7-8b9260637500-kube-api-access-pxvmt\") pod \"b627df02-a2a7-4070-b2c7-8b9260637500\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.056868 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-utilities\") pod \"b627df02-a2a7-4070-b2c7-8b9260637500\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.056940 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-catalog-content\") pod \"b627df02-a2a7-4070-b2c7-8b9260637500\" (UID: \"b627df02-a2a7-4070-b2c7-8b9260637500\") " Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.057727 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-utilities" (OuterVolumeSpecName: "utilities") pod "b627df02-a2a7-4070-b2c7-8b9260637500" (UID: "b627df02-a2a7-4070-b2c7-8b9260637500"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.063734 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b627df02-a2a7-4070-b2c7-8b9260637500-kube-api-access-pxvmt" (OuterVolumeSpecName: "kube-api-access-pxvmt") pod "b627df02-a2a7-4070-b2c7-8b9260637500" (UID: "b627df02-a2a7-4070-b2c7-8b9260637500"). InnerVolumeSpecName "kube-api-access-pxvmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.092709 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.122564 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b627df02-a2a7-4070-b2c7-8b9260637500" (UID: "b627df02-a2a7-4070-b2c7-8b9260637500"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.130822 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6shbw" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.158631 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxvmt\" (UniqueName: \"kubernetes.io/projected/b627df02-a2a7-4070-b2c7-8b9260637500-kube-api-access-pxvmt\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.158672 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.158685 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b627df02-a2a7-4070-b2c7-8b9260637500-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.432999 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.563130 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-utilities\") pod \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.563229 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-catalog-content\") pod \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.563362 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shs7s\" (UniqueName: \"kubernetes.io/projected/d1d92eba-93c9-43cf-8c22-584d8a2e1579-kube-api-access-shs7s\") pod \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\" (UID: \"d1d92eba-93c9-43cf-8c22-584d8a2e1579\") " Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.564645 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-utilities" (OuterVolumeSpecName: "utilities") pod "d1d92eba-93c9-43cf-8c22-584d8a2e1579" (UID: "d1d92eba-93c9-43cf-8c22-584d8a2e1579"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.568505 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d92eba-93c9-43cf-8c22-584d8a2e1579-kube-api-access-shs7s" (OuterVolumeSpecName: "kube-api-access-shs7s") pod "d1d92eba-93c9-43cf-8c22-584d8a2e1579" (UID: "d1d92eba-93c9-43cf-8c22-584d8a2e1579"). InnerVolumeSpecName "kube-api-access-shs7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.587830 4940 generic.go:334] "Generic (PLEG): container finished" podID="b627df02-a2a7-4070-b2c7-8b9260637500" containerID="d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe" exitCode=0 Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.587929 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmcm9" event={"ID":"b627df02-a2a7-4070-b2c7-8b9260637500","Type":"ContainerDied","Data":"d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe"} Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.587959 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mmcm9" event={"ID":"b627df02-a2a7-4070-b2c7-8b9260637500","Type":"ContainerDied","Data":"c6eb116cbda86fb79800ee166fd0190fe411ec4082dbafcb3a5c02f8f84e1ace"} Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.587975 4940 scope.go:117] "RemoveContainer" containerID="d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.588109 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mmcm9" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.596271 4940 generic.go:334] "Generic (PLEG): container finished" podID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerID="322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b" exitCode=0 Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.596332 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cg7jw" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.596399 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg7jw" event={"ID":"d1d92eba-93c9-43cf-8c22-584d8a2e1579","Type":"ContainerDied","Data":"322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b"} Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.596479 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cg7jw" event={"ID":"d1d92eba-93c9-43cf-8c22-584d8a2e1579","Type":"ContainerDied","Data":"0f45b121fbf5bfc6155727e3725f638fb699fcf754116aa5803c6f9c93a81e67"} Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.627702 4940 scope.go:117] "RemoveContainer" containerID="f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.638205 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mmcm9"] Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.639736 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mmcm9"] Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.666063 4940 scope.go:117] "RemoveContainer" containerID="f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.667424 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.667451 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shs7s\" (UniqueName: \"kubernetes.io/projected/d1d92eba-93c9-43cf-8c22-584d8a2e1579-kube-api-access-shs7s\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:20 crc 
kubenswrapper[4940]: I0223 08:52:20.678016 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7w9jb"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.680808 4940 scope.go:117] "RemoveContainer" containerID="d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe"
Feb 23 08:52:20 crc kubenswrapper[4940]: E0223 08:52:20.681438 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe\": container with ID starting with d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe not found: ID does not exist" containerID="d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.681462 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe"} err="failed to get container status \"d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe\": rpc error: code = NotFound desc = could not find container \"d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe\": container with ID starting with d4bc9ce99c8c7a73553b4b20a7e4949547cc7c9521f81b59dbda48ebf62c9bbe not found: ID does not exist"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.681478 4940 scope.go:117] "RemoveContainer" containerID="f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327"
Feb 23 08:52:20 crc kubenswrapper[4940]: E0223 08:52:20.681752 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327\": container with ID starting with f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327 not found: ID does not exist" containerID="f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.681773 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327"} err="failed to get container status \"f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327\": rpc error: code = NotFound desc = could not find container \"f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327\": container with ID starting with f2c9a3b20d7cbb753eca22d7547d236cc5da5dcdf16a799b11cedb2192028327 not found: ID does not exist"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.681787 4940 scope.go:117] "RemoveContainer" containerID="f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401"
Feb 23 08:52:20 crc kubenswrapper[4940]: E0223 08:52:20.682090 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401\": container with ID starting with f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401 not found: ID does not exist" containerID="f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.682107 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401"} err="failed to get container status \"f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401\": rpc error: code = NotFound desc = could not find container \"f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401\": container with ID starting with f70bc7ba72d690a203569460b38b1ab8a145c7f4a84b00d7902b59ea4185e401 not found: ID does not exist"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.682120 4940 scope.go:117] "RemoveContainer" containerID="322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.683214 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1d92eba-93c9-43cf-8c22-584d8a2e1579" (UID: "d1d92eba-93c9-43cf-8c22-584d8a2e1579"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.697853 4940 scope.go:117] "RemoveContainer" containerID="f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.755119 4940 scope.go:117] "RemoveContainer" containerID="5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.769190 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1d92eba-93c9-43cf-8c22-584d8a2e1579-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.773585 4940 scope.go:117] "RemoveContainer" containerID="322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b"
Feb 23 08:52:20 crc kubenswrapper[4940]: E0223 08:52:20.774027 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b\": container with ID starting with 322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b not found: ID does not exist" containerID="322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.774061 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b"} err="failed to get container status \"322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b\": rpc error: code = NotFound desc = could not find container \"322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b\": container with ID starting with 322bc5c2861504d0cbb54e9c203e66d323d765ef2a6feb32e4d565f901ea707b not found: ID does not exist"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.774085 4940 scope.go:117] "RemoveContainer" containerID="f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb"
Feb 23 08:52:20 crc kubenswrapper[4940]: E0223 08:52:20.774394 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb\": container with ID starting with f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb not found: ID does not exist" containerID="f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.774420 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb"} err="failed to get container status \"f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb\": rpc error: code = NotFound desc = could not find container \"f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb\": container with ID starting with f8f48781d6d7d86de13825249378c0c34dd442c7fd04a81b54f17f571e3d9ecb not found: ID does not exist"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.774440 4940 scope.go:117] "RemoveContainer" containerID="5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810"
Feb 23 08:52:20 crc kubenswrapper[4940]: E0223 08:52:20.774694 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810\": container with ID starting with 5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810 not found: ID does not exist" containerID="5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.774720 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810"} err="failed to get container status \"5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810\": rpc error: code = NotFound desc = could not find container \"5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810\": container with ID starting with 5bfdfbc6eac963cad0712336d4aa006674004cbbc0d70174671ccd11ecf20810 not found: ID does not exist"
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.945595 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cg7jw"]
Feb 23 08:52:20 crc kubenswrapper[4940]: I0223 08:52:20.951228 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cg7jw"]
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.056015 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zgq2"]
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.357783 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" path="/var/lib/kubelet/pods/b627df02-a2a7-4070-b2c7-8b9260637500/volumes"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.360098 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" path="/var/lib/kubelet/pods/d1d92eba-93c9-43cf-8c22-584d8a2e1579/volumes"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.402585 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403038 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="registry-server"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403069 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="registry-server"
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403099 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="registry-server"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403120 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="registry-server"
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403148 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="extract-utilities"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403166 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="extract-utilities"
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403189 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="extract-utilities"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403208 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="extract-utilities"
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403232 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="extract-content"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403250 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="extract-content"
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403291 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092286ba-1bc6-4eaa-b993-928701e1366f" containerName="pruner"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403309 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="092286ba-1bc6-4eaa-b993-928701e1366f" containerName="pruner"
Feb 23 08:52:21 crc kubenswrapper[4940]: E0223 08:52:21.403338 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="extract-content"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403357 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="extract-content"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403653 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d92eba-93c9-43cf-8c22-584d8a2e1579" containerName="registry-server"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403697 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b627df02-a2a7-4070-b2c7-8b9260637500" containerName="registry-server"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.403722 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="092286ba-1bc6-4eaa-b993-928701e1366f" containerName="pruner"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.404540 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.409722 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.409931 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.411246 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.578135 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.578661 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c4db25-8f75-4bad-8f11-d06e6a20d747-kube-api-access\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.578939 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-var-lock\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.609248 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9zgq2" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="registry-server" containerID="cri-o://0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d" gracePeriod=2
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.680126 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c4db25-8f75-4bad-8f11-d06e6a20d747-kube-api-access\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.680221 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-var-lock\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.680263 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.680354 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-kubelet-dir\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.680804 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-var-lock\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.702968 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c4db25-8f75-4bad-8f11-d06e6a20d747-kube-api-access\") pod \"installer-9-crc\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.723221 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 23 08:52:21 crc kubenswrapper[4940]: I0223 08:52:21.967845 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 23 08:52:21 crc kubenswrapper[4940]: W0223 08:52:21.979407 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod07c4db25_8f75_4bad_8f11_d06e6a20d747.slice/crio-fbf366600e79bf69c268e8c56bf01c1ec34bfc890c7521e8d8a59226f4f0e427 WatchSource:0}: Error finding container fbf366600e79bf69c268e8c56bf01c1ec34bfc890c7521e8d8a59226f4f0e427: Status 404 returned error can't find the container with id fbf366600e79bf69c268e8c56bf01c1ec34bfc890c7521e8d8a59226f4f0e427
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.056836 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zgq2"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.186242 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-catalog-content\") pod \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") "
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.186315 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-utilities\") pod \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") "
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.186337 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k24j\" (UniqueName: \"kubernetes.io/projected/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-kube-api-access-5k24j\") pod \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\" (UID: \"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8\") "
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.188158 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-utilities" (OuterVolumeSpecName: "utilities") pod "e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" (UID: "e5b4ab15-e0ee-4adb-814e-0aea200aa9d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.192740 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-kube-api-access-5k24j" (OuterVolumeSpecName: "kube-api-access-5k24j") pod "e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" (UID: "e5b4ab15-e0ee-4adb-814e-0aea200aa9d8"). InnerVolumeSpecName "kube-api-access-5k24j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.222192 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" (UID: "e5b4ab15-e0ee-4adb-814e-0aea200aa9d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.288965 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.289008 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.289023 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k24j\" (UniqueName: \"kubernetes.io/projected/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8-kube-api-access-5k24j\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.615094 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07c4db25-8f75-4bad-8f11-d06e6a20d747","Type":"ContainerStarted","Data":"dfb3db24e675c521711679c520699168c3131a8920411446a50d3fd99ea40f86"}
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.615147 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07c4db25-8f75-4bad-8f11-d06e6a20d747","Type":"ContainerStarted","Data":"fbf366600e79bf69c268e8c56bf01c1ec34bfc890c7521e8d8a59226f4f0e427"}
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.632755 4940 generic.go:334] "Generic (PLEG): container finished" podID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerID="0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d" exitCode=0
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.632822 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zgq2" event={"ID":"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8","Type":"ContainerDied","Data":"0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d"}
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.632857 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zgq2" event={"ID":"e5b4ab15-e0ee-4adb-814e-0aea200aa9d8","Type":"ContainerDied","Data":"5a94ef051733ac25de3250fa8525f81b28c9eb531f5ce82594d35e16b243f6b3"}
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.632883 4940 scope.go:117] "RemoveContainer" containerID="0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.633093 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zgq2"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.648358 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.648342291 podStartE2EDuration="1.648342291s" podCreationTimestamp="2026-02-23 08:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:22.643753626 +0000 UTC m=+274.026959783" watchObservedRunningTime="2026-02-23 08:52:22.648342291 +0000 UTC m=+274.031548448"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.660534 4940 scope.go:117] "RemoveContainer" containerID="09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.680379 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zgq2"]
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.686056 4940 scope.go:117] "RemoveContainer" containerID="feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.687488 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zgq2"]
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.700316 4940 scope.go:117] "RemoveContainer" containerID="0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d"
Feb 23 08:52:22 crc kubenswrapper[4940]: E0223 08:52:22.700939 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d\": container with ID starting with 0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d not found: ID does not exist" containerID="0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.701006 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d"} err="failed to get container status \"0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d\": rpc error: code = NotFound desc = could not find container \"0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d\": container with ID starting with 0c3dd9e80eb1e94d123a3e747d065e505a13f9319cc8ba56cad66e5f4f1a320d not found: ID does not exist"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.701041 4940 scope.go:117] "RemoveContainer" containerID="09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f"
Feb 23 08:52:22 crc kubenswrapper[4940]: E0223 08:52:22.701380 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f\": container with ID starting with 09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f not found: ID does not exist" containerID="09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.701409 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f"} err="failed to get container status \"09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f\": rpc error: code = NotFound desc = could not find container \"09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f\": container with ID starting with 09087d2397927be93f120b32637375f1735fe6d0d071de86b9a174f2cb554b4f not found: ID does not exist"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.701425 4940 scope.go:117] "RemoveContainer" containerID="feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9"
Feb 23 08:52:22 crc kubenswrapper[4940]: E0223 08:52:22.701760 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9\": container with ID starting with feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9 not found: ID does not exist" containerID="feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9"
Feb 23 08:52:22 crc kubenswrapper[4940]: I0223 08:52:22.701786 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9"} err="failed to get container status \"feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9\": rpc error: code = NotFound desc = could not find container \"feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9\": container with ID starting with feb7300dd07a770d99a9da96807b929e5fff1fe0952b9815443bc3de122e09c9 not found: ID does not exist"
Feb 23 08:52:23 crc kubenswrapper[4940]: I0223 08:52:23.355767 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" path="/var/lib/kubelet/pods/e5b4ab15-e0ee-4adb-814e-0aea200aa9d8/volumes"
Feb 23 08:52:23 crc kubenswrapper[4940]: I0223 08:52:23.461582 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6shbw"]
Feb 23 08:52:23 crc kubenswrapper[4940]: I0223 08:52:23.461943 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6shbw" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="registry-server" containerID="cri-o://6b6ccfddc245636386dcb0729fc553c9a914b0db47eb094e0e3aa613768c1ad4" gracePeriod=2
Feb 23 08:52:23 crc kubenswrapper[4940]: I0223 08:52:23.641302 4940 generic.go:334] "Generic (PLEG): container finished" podID="f0f689fc-e907-420c-869b-0a3d496358a4" containerID="6b6ccfddc245636386dcb0729fc553c9a914b0db47eb094e0e3aa613768c1ad4" exitCode=0
Feb 23 08:52:23 crc kubenswrapper[4940]: I0223 08:52:23.642028 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6shbw" event={"ID":"f0f689fc-e907-420c-869b-0a3d496358a4","Type":"ContainerDied","Data":"6b6ccfddc245636386dcb0729fc553c9a914b0db47eb094e0e3aa613768c1ad4"}
Feb 23 08:52:23 crc kubenswrapper[4940]: I0223 08:52:23.832821 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6shbw"
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.014790 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-utilities\") pod \"f0f689fc-e907-420c-869b-0a3d496358a4\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") "
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.015020 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5429\" (UniqueName: \"kubernetes.io/projected/f0f689fc-e907-420c-869b-0a3d496358a4-kube-api-access-p5429\") pod \"f0f689fc-e907-420c-869b-0a3d496358a4\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") "
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.015067 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-catalog-content\") pod \"f0f689fc-e907-420c-869b-0a3d496358a4\" (UID: \"f0f689fc-e907-420c-869b-0a3d496358a4\") "
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.015896 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-utilities" (OuterVolumeSpecName: "utilities") pod "f0f689fc-e907-420c-869b-0a3d496358a4" (UID: "f0f689fc-e907-420c-869b-0a3d496358a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.021603 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f689fc-e907-420c-869b-0a3d496358a4-kube-api-access-p5429" (OuterVolumeSpecName: "kube-api-access-p5429") pod "f0f689fc-e907-420c-869b-0a3d496358a4" (UID: "f0f689fc-e907-420c-869b-0a3d496358a4"). InnerVolumeSpecName "kube-api-access-p5429". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.116294 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.116333 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5429\" (UniqueName: \"kubernetes.io/projected/f0f689fc-e907-420c-869b-0a3d496358a4-kube-api-access-p5429\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.154668 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0f689fc-e907-420c-869b-0a3d496358a4" (UID: "f0f689fc-e907-420c-869b-0a3d496358a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.218479 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f689fc-e907-420c-869b-0a3d496358a4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.648969 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6shbw" event={"ID":"f0f689fc-e907-420c-869b-0a3d496358a4","Type":"ContainerDied","Data":"46f3ac616ee244492e447917dc6a1964a7c228462e67d6e6f73c3b7c0cf3d24a"}
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.649395 4940 scope.go:117] "RemoveContainer" containerID="6b6ccfddc245636386dcb0729fc553c9a914b0db47eb094e0e3aa613768c1ad4"
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.649069 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6shbw"
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.664310 4940 scope.go:117] "RemoveContainer" containerID="8c44eba136de2f29e0e6114913db95655e6557aab2438ea008b26ab23cd5c416"
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.674479 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6shbw"]
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.678363 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6shbw"]
Feb 23 08:52:24 crc kubenswrapper[4940]: I0223 08:52:24.686746 4940 scope.go:117] "RemoveContainer" containerID="8f82bc158ea28d01e5fc05f6db76667e5d32f37cde8673cb60c65437f669c37e"
Feb 23 08:52:25 crc kubenswrapper[4940]: I0223 08:52:25.353281 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" path="/var/lib/kubelet/pods/f0f689fc-e907-420c-869b-0a3d496358a4/volumes"
Feb 23 08:52:27 crc kubenswrapper[4940]: I0223 08:52:27.587261 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rrhk2"]
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.429510 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.429831 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.429875 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs"
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.430430 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.430485 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7" gracePeriod=600
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.687879 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7" exitCode=0
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.687931 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7"}
Feb 23 08:52:31 crc kubenswrapper[4940]: I0223 08:52:31.687959 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"829eb1f5afcbc1f2a52e3bf9dd8fc7112a0f410f0c39601eac390265ff2bb42a"}
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.306677 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-98f8ddb54-5qrql"]
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.307597 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" podUID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" containerName="controller-manager" containerID="cri-o://fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109" gracePeriod=30
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.341839 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"]
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.342143 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" podUID="d66696cc-13f0-4bec-a40a-1874441498ee" containerName="route-controller-manager" containerID="cri-o://18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4" gracePeriod=30
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.683052 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql"
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.689593 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.751994 4940 generic.go:334] "Generic (PLEG): container finished" podID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" containerID="fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109" exitCode=0
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.752111 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql"
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.752228 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" event={"ID":"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d","Type":"ContainerDied","Data":"fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109"}
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.752292 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-98f8ddb54-5qrql" event={"ID":"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d","Type":"ContainerDied","Data":"27335f41bcf6e5780150260eb461d358616b1355d17c7aeb43997451871183f7"}
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.752318 4940 scope.go:117] "RemoveContainer" containerID="fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109"
Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.756493 4940 generic.go:334] "Generic (PLEG): container finished"
podID="d66696cc-13f0-4bec-a40a-1874441498ee" containerID="18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4" exitCode=0 Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.756543 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" event={"ID":"d66696cc-13f0-4bec-a40a-1874441498ee","Type":"ContainerDied","Data":"18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4"} Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.756580 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" event={"ID":"d66696cc-13f0-4bec-a40a-1874441498ee","Type":"ContainerDied","Data":"db37f91299927042231ab874ff96de31f167ed839c27c1761263d0d0744edad9"} Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.756659 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.774454 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-client-ca\") pod \"d66696cc-13f0-4bec-a40a-1874441498ee\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.774509 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9rf7\" (UniqueName: \"kubernetes.io/projected/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-kube-api-access-z9rf7\") pod \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.774534 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-serving-cert\") pod \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.774564 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-config\") pod \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.776028 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-config" (OuterVolumeSpecName: "config") pod "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" (UID: "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.774607 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-config\") pod \"d66696cc-13f0-4bec-a40a-1874441498ee\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.776140 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66696cc-13f0-4bec-a40a-1874441498ee-serving-cert\") pod \"d66696cc-13f0-4bec-a40a-1874441498ee\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.776677 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "d66696cc-13f0-4bec-a40a-1874441498ee" (UID: "d66696cc-13f0-4bec-a40a-1874441498ee"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.777283 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-client-ca\") pod \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.777331 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl5hl\" (UniqueName: \"kubernetes.io/projected/d66696cc-13f0-4bec-a40a-1874441498ee-kube-api-access-cl5hl\") pod \"d66696cc-13f0-4bec-a40a-1874441498ee\" (UID: \"d66696cc-13f0-4bec-a40a-1874441498ee\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.777441 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-config" (OuterVolumeSpecName: "config") pod "d66696cc-13f0-4bec-a40a-1874441498ee" (UID: "d66696cc-13f0-4bec-a40a-1874441498ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.777833 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-client-ca" (OuterVolumeSpecName: "client-ca") pod "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" (UID: "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.777947 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-proxy-ca-bundles\") pod \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\" (UID: \"576e1b7d-ed0d-45c7-8730-b93b9ee1d86d\") " Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.778353 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.778368 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.778377 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66696cc-13f0-4bec-a40a-1874441498ee-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.778387 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.778863 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" (UID: "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.781317 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" (UID: "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.782040 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66696cc-13f0-4bec-a40a-1874441498ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d66696cc-13f0-4bec-a40a-1874441498ee" (UID: "d66696cc-13f0-4bec-a40a-1874441498ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.782196 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66696cc-13f0-4bec-a40a-1874441498ee-kube-api-access-cl5hl" (OuterVolumeSpecName: "kube-api-access-cl5hl") pod "d66696cc-13f0-4bec-a40a-1874441498ee" (UID: "d66696cc-13f0-4bec-a40a-1874441498ee"). InnerVolumeSpecName "kube-api-access-cl5hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.782884 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-kube-api-access-z9rf7" (OuterVolumeSpecName: "kube-api-access-z9rf7") pod "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" (UID: "576e1b7d-ed0d-45c7-8730-b93b9ee1d86d"). InnerVolumeSpecName "kube-api-access-z9rf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.787469 4940 scope.go:117] "RemoveContainer" containerID="fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109" Feb 23 08:52:35 crc kubenswrapper[4940]: E0223 08:52:35.788017 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109\": container with ID starting with fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109 not found: ID does not exist" containerID="fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.788071 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109"} err="failed to get container status \"fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109\": rpc error: code = NotFound desc = could not find container \"fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109\": container with ID starting with fa841dc3c61b6044bb1ece41f2e22d8cfd65c979a327cd2461ec6efed33db109 not found: ID does not exist" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.788103 4940 scope.go:117] "RemoveContainer" containerID="18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.807445 4940 scope.go:117] "RemoveContainer" containerID="18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4" Feb 23 08:52:35 crc kubenswrapper[4940]: E0223 08:52:35.807899 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4\": container with ID starting with 
18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4 not found: ID does not exist" containerID="18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.807955 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4"} err="failed to get container status \"18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4\": rpc error: code = NotFound desc = could not find container \"18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4\": container with ID starting with 18db3853e6a450249c65461ba46aa0452cd05fcf8a4970d9468686bd576527d4 not found: ID does not exist" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.879449 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66696cc-13f0-4bec-a40a-1874441498ee-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.879495 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl5hl\" (UniqueName: \"kubernetes.io/projected/d66696cc-13f0-4bec-a40a-1874441498ee-kube-api-access-cl5hl\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.879516 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.879535 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9rf7\" (UniqueName: \"kubernetes.io/projected/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-kube-api-access-z9rf7\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:35 crc kubenswrapper[4940]: I0223 08:52:35.879551 4940 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:36 crc kubenswrapper[4940]: I0223 08:52:36.082750 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-98f8ddb54-5qrql"] Feb 23 08:52:36 crc kubenswrapper[4940]: I0223 08:52:36.096834 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-98f8ddb54-5qrql"] Feb 23 08:52:36 crc kubenswrapper[4940]: I0223 08:52:36.099335 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"] Feb 23 08:52:36 crc kubenswrapper[4940]: I0223 08:52:36.101823 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d79fb88b9-6n2zj"] Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.023732 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m"] Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024229 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" containerName="controller-manager" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024262 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" containerName="controller-manager" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024284 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="extract-utilities" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024299 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="extract-utilities" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024326 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="extract-utilities" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024341 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="extract-utilities" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024358 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="registry-server" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024373 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="registry-server" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024392 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="extract-content" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024406 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="extract-content" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024438 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66696cc-13f0-4bec-a40a-1874441498ee" containerName="route-controller-manager" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024453 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66696cc-13f0-4bec-a40a-1874441498ee" containerName="route-controller-manager" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 08:52:37.024474 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="registry-server" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024490 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="registry-server" Feb 23 08:52:37 crc kubenswrapper[4940]: E0223 
08:52:37.024514 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="extract-content" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024529 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="extract-content" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024797 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66696cc-13f0-4bec-a40a-1874441498ee" containerName="route-controller-manager" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024848 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5b4ab15-e0ee-4adb-814e-0aea200aa9d8" containerName="registry-server" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024881 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" containerName="controller-manager" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.024909 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f689fc-e907-420c-869b-0a3d496358a4" containerName="registry-server" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.025886 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.029667 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.031194 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.031495 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.031924 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.032117 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.032208 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.033448 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh"] Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.034863 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.040601 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.040846 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.040957 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.041248 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.041434 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.041739 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.047968 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh"] Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.052821 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m"] Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.059796 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.106780 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srzlp\" (UniqueName: 
\"kubernetes.io/projected/c623ed59-a1b3-491e-9498-b53d7f096ced-kube-api-access-srzlp\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.107125 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxrz2\" (UniqueName: \"kubernetes.io/projected/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-kube-api-access-rxrz2\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.107271 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-config\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.107397 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c623ed59-a1b3-491e-9498-b53d7f096ced-serving-cert\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.107560 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-serving-cert\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " 
pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.107713 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-client-ca\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.107847 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-client-ca\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.108021 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-proxy-ca-bundles\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.108248 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-config\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.209439 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-config\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.209529 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srzlp\" (UniqueName: \"kubernetes.io/projected/c623ed59-a1b3-491e-9498-b53d7f096ced-kube-api-access-srzlp\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.209557 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxrz2\" (UniqueName: \"kubernetes.io/projected/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-kube-api-access-rxrz2\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.209582 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-config\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.210496 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-config\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 
08:52:37.210694 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c623ed59-a1b3-491e-9498-b53d7f096ced-serving-cert\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.210971 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-serving-cert\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.211160 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-client-ca\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.211379 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-client-ca\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.211531 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-proxy-ca-bundles\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " 
pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.211425 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-config\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.212342 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-client-ca\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.213129 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-proxy-ca-bundles\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.213441 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-client-ca\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.214844 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c623ed59-a1b3-491e-9498-b53d7f096ced-serving-cert\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: 
\"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.217556 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-serving-cert\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.229579 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxrz2\" (UniqueName: \"kubernetes.io/projected/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-kube-api-access-rxrz2\") pod \"controller-manager-7cb7cb94c-lw7gh\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.234854 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srzlp\" (UniqueName: \"kubernetes.io/projected/c623ed59-a1b3-491e-9498-b53d7f096ced-kube-api-access-srzlp\") pod \"route-controller-manager-77f498b9-88t7m\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.356582 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576e1b7d-ed0d-45c7-8730-b93b9ee1d86d" path="/var/lib/kubelet/pods/576e1b7d-ed0d-45c7-8730-b93b9ee1d86d/volumes" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.358933 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66696cc-13f0-4bec-a40a-1874441498ee" path="/var/lib/kubelet/pods/d66696cc-13f0-4bec-a40a-1874441498ee/volumes" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.359713 4940 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.375250 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.649199 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m"] Feb 23 08:52:37 crc kubenswrapper[4940]: W0223 08:52:37.654825 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc623ed59_a1b3_491e_9498_b53d7f096ced.slice/crio-415a2044068d2ac6aa74eb3990543a37480fc2727d482abcf45c7edccaadaf0b WatchSource:0}: Error finding container 415a2044068d2ac6aa74eb3990543a37480fc2727d482abcf45c7edccaadaf0b: Status 404 returned error can't find the container with id 415a2044068d2ac6aa74eb3990543a37480fc2727d482abcf45c7edccaadaf0b Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.772414 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" event={"ID":"c623ed59-a1b3-491e-9498-b53d7f096ced","Type":"ContainerStarted","Data":"415a2044068d2ac6aa74eb3990543a37480fc2727d482abcf45c7edccaadaf0b"} Feb 23 08:52:37 crc kubenswrapper[4940]: I0223 08:52:37.807562 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh"] Feb 23 08:52:37 crc kubenswrapper[4940]: W0223 08:52:37.815151 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod489ebeaa_82d0_4a83_ae8b_ffdc565ec43c.slice/crio-311240400fd361a20e1a9dc543b3c77e2601af2ade07744ee34afd3480c73e60 WatchSource:0}: Error finding container 
311240400fd361a20e1a9dc543b3c77e2601af2ade07744ee34afd3480c73e60: Status 404 returned error can't find the container with id 311240400fd361a20e1a9dc543b3c77e2601af2ade07744ee34afd3480c73e60 Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.791113 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" event={"ID":"c623ed59-a1b3-491e-9498-b53d7f096ced","Type":"ContainerStarted","Data":"68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208"} Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.792298 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.796347 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" event={"ID":"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c","Type":"ContainerStarted","Data":"21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9"} Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.796378 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" event={"ID":"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c","Type":"ContainerStarted","Data":"311240400fd361a20e1a9dc543b3c77e2601af2ade07744ee34afd3480c73e60"} Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.797010 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.805601 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.816700 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:38 crc kubenswrapper[4940]: I0223 08:52:38.819471 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" podStartSLOduration=3.8194531 podStartE2EDuration="3.8194531s" podCreationTimestamp="2026-02-23 08:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:38.816861298 +0000 UTC m=+290.200067455" watchObservedRunningTime="2026-02-23 08:52:38.8194531 +0000 UTC m=+290.202659257" Feb 23 08:52:49 crc kubenswrapper[4940]: I0223 08:52:49.114469 4940 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 23 08:52:52 crc kubenswrapper[4940]: I0223 08:52:52.610896 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" podUID="c733e45d-a072-4619-b2f8-aea6d77b112f" containerName="oauth-openshift" containerID="cri-o://b0f38fd326c8c7e639046b6dbffc5f3aeee8b6a9d51e0727dd2478b5c51b6b74" gracePeriod=15 Feb 23 08:52:52 crc kubenswrapper[4940]: I0223 08:52:52.884959 4940 generic.go:334] "Generic (PLEG): container finished" podID="c733e45d-a072-4619-b2f8-aea6d77b112f" containerID="b0f38fd326c8c7e639046b6dbffc5f3aeee8b6a9d51e0727dd2478b5c51b6b74" exitCode=0 Feb 23 08:52:52 crc kubenswrapper[4940]: I0223 08:52:52.885257 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" event={"ID":"c733e45d-a072-4619-b2f8-aea6d77b112f","Type":"ContainerDied","Data":"b0f38fd326c8c7e639046b6dbffc5f3aeee8b6a9d51e0727dd2478b5c51b6b74"} Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.087976 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.112931 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" podStartSLOduration=18.112904131 podStartE2EDuration="18.112904131s" podCreationTimestamp="2026-02-23 08:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:38.871410488 +0000 UTC m=+290.254616675" watchObservedRunningTime="2026-02-23 08:52:53.112904131 +0000 UTC m=+304.496110288" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.120510 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2"] Feb 23 08:52:53 crc kubenswrapper[4940]: E0223 08:52:53.120865 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c733e45d-a072-4619-b2f8-aea6d77b112f" containerName="oauth-openshift" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.120879 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c733e45d-a072-4619-b2f8-aea6d77b112f" containerName="oauth-openshift" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.121152 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c733e45d-a072-4619-b2f8-aea6d77b112f" containerName="oauth-openshift" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.122007 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156483 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2"] Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156767 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-trusted-ca-bundle\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156806 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-cliconfig\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156858 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-ocp-branding-template\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156888 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-idp-0-file-data\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156916 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-serving-cert\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156941 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltvbx\" (UniqueName: \"kubernetes.io/projected/c733e45d-a072-4619-b2f8-aea6d77b112f-kube-api-access-ltvbx\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.156972 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-session\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157022 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-login\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157055 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-service-ca\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157085 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-policies\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157106 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-router-certs\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157134 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-error\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157162 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-provider-selection\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157186 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-dir\") pod \"c733e45d-a072-4619-b2f8-aea6d77b112f\" (UID: \"c733e45d-a072-4619-b2f8-aea6d77b112f\") " Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.157498 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: 
"c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.158224 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.158421 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.158631 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.159298 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.163461 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.164492 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c733e45d-a072-4619-b2f8-aea6d77b112f-kube-api-access-ltvbx" (OuterVolumeSpecName: "kube-api-access-ltvbx") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "kube-api-access-ltvbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.166952 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.167228 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.167553 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.167794 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.174077 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.179781 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.180058 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c733e45d-a072-4619-b2f8-aea6d77b112f" (UID: "c733e45d-a072-4619-b2f8-aea6d77b112f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258467 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258526 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258562 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258643 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwh8w\" (UniqueName: \"kubernetes.io/projected/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-kube-api-access-mwh8w\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258701 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258732 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258794 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258822 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258879 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258922 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258947 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.258977 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259006 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259032 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259079 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259095 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259109 4940 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259123 4940 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259137 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259150 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259162 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259174 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259186 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltvbx\" (UniqueName: \"kubernetes.io/projected/c733e45d-a072-4619-b2f8-aea6d77b112f-kube-api-access-ltvbx\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259200 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-session\") on 
node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259212 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259223 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259233 4940 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c733e45d-a072-4619-b2f8-aea6d77b112f-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.259243 4940 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c733e45d-a072-4619-b2f8-aea6d77b112f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360008 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360054 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwh8w\" (UniqueName: \"kubernetes.io/projected/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-kube-api-access-mwh8w\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " 
pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360093 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360116 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360147 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360164 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360188 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360205 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360219 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360240 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360262 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-error\") pod 
\"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360279 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360297 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.360312 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.361116 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.362949 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.362987 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.363178 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.363286 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.365681 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " 
pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.365873 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.366059 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.367295 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.367539 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.367919 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.368903 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.369126 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.379065 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwh8w\" (UniqueName: \"kubernetes.io/projected/ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0-kube-api-access-mwh8w\") pod \"oauth-openshift-5d4b6f47b4-txxg2\" (UID: \"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.454356 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.895161 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" event={"ID":"c733e45d-a072-4619-b2f8-aea6d77b112f","Type":"ContainerDied","Data":"3f739c33d43df28a656d8681974c2a2cdce1c262411a7308436aa14935e8d280"} Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.895627 4940 scope.go:117] "RemoveContainer" containerID="b0f38fd326c8c7e639046b6dbffc5f3aeee8b6a9d51e0727dd2478b5c51b6b74" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.895278 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rrhk2" Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.913936 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2"] Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.917445 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rrhk2"] Feb 23 08:52:53 crc kubenswrapper[4940]: I0223 08:52:53.923223 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rrhk2"] Feb 23 08:52:53 crc kubenswrapper[4940]: W0223 08:52:53.935413 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac5f7bee_6c1d_4fc8_b805_79a2f14aeca0.slice/crio-7d9b3df61fa981b4f16f92248e6c2cd81a2a80206bd6e79e581306f1cbec98ac WatchSource:0}: Error finding container 7d9b3df61fa981b4f16f92248e6c2cd81a2a80206bd6e79e581306f1cbec98ac: Status 404 returned error can't find the container with id 7d9b3df61fa981b4f16f92248e6c2cd81a2a80206bd6e79e581306f1cbec98ac Feb 23 08:52:54 crc kubenswrapper[4940]: I0223 08:52:54.901863 4940 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" event={"ID":"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0","Type":"ContainerStarted","Data":"a5fe44470a7fa9b47b8f51a5fa635933d7cbf0ea5bd74c0a4878e84fc2e3c55a"} Feb 23 08:52:54 crc kubenswrapper[4940]: I0223 08:52:54.902369 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:54 crc kubenswrapper[4940]: I0223 08:52:54.902399 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" event={"ID":"ac5f7bee-6c1d-4fc8-b805-79a2f14aeca0","Type":"ContainerStarted","Data":"7d9b3df61fa981b4f16f92248e6c2cd81a2a80206bd6e79e581306f1cbec98ac"} Feb 23 08:52:54 crc kubenswrapper[4940]: I0223 08:52:54.910309 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" Feb 23 08:52:54 crc kubenswrapper[4940]: I0223 08:52:54.932576 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-txxg2" podStartSLOduration=27.932559699 podStartE2EDuration="27.932559699s" podCreationTimestamp="2026-02-23 08:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:54.928107748 +0000 UTC m=+306.311313945" watchObservedRunningTime="2026-02-23 08:52:54.932559699 +0000 UTC m=+306.315765856" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.264938 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh"] Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.265217 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" podUID="489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" 
containerName="controller-manager" containerID="cri-o://21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9" gracePeriod=30 Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.282487 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m"] Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.282706 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" podUID="c623ed59-a1b3-491e-9498-b53d7f096ced" containerName="route-controller-manager" containerID="cri-o://68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208" gracePeriod=30 Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.351911 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c733e45d-a072-4619-b2f8-aea6d77b112f" path="/var/lib/kubelet/pods/c733e45d-a072-4619-b2f8-aea6d77b112f/volumes" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.722861 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.791402 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srzlp\" (UniqueName: \"kubernetes.io/projected/c623ed59-a1b3-491e-9498-b53d7f096ced-kube-api-access-srzlp\") pod \"c623ed59-a1b3-491e-9498-b53d7f096ced\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.791644 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-config\") pod \"c623ed59-a1b3-491e-9498-b53d7f096ced\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.791703 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c623ed59-a1b3-491e-9498-b53d7f096ced-serving-cert\") pod \"c623ed59-a1b3-491e-9498-b53d7f096ced\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.791751 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-client-ca\") pod \"c623ed59-a1b3-491e-9498-b53d7f096ced\" (UID: \"c623ed59-a1b3-491e-9498-b53d7f096ced\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.792501 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-config" (OuterVolumeSpecName: "config") pod "c623ed59-a1b3-491e-9498-b53d7f096ced" (UID: "c623ed59-a1b3-491e-9498-b53d7f096ced"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.792668 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-client-ca" (OuterVolumeSpecName: "client-ca") pod "c623ed59-a1b3-491e-9498-b53d7f096ced" (UID: "c623ed59-a1b3-491e-9498-b53d7f096ced"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.797467 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c623ed59-a1b3-491e-9498-b53d7f096ced-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c623ed59-a1b3-491e-9498-b53d7f096ced" (UID: "c623ed59-a1b3-491e-9498-b53d7f096ced"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.799986 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c623ed59-a1b3-491e-9498-b53d7f096ced-kube-api-access-srzlp" (OuterVolumeSpecName: "kube-api-access-srzlp") pod "c623ed59-a1b3-491e-9498-b53d7f096ced" (UID: "c623ed59-a1b3-491e-9498-b53d7f096ced"). InnerVolumeSpecName "kube-api-access-srzlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.827577 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.892641 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxrz2\" (UniqueName: \"kubernetes.io/projected/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-kube-api-access-rxrz2\") pod \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.892723 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-client-ca\") pod \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.892746 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-config\") pod \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.892844 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-serving-cert\") pod \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.892866 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-proxy-ca-bundles\") pod \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\" (UID: \"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c\") " Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.893190 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.893207 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srzlp\" (UniqueName: \"kubernetes.io/projected/c623ed59-a1b3-491e-9498-b53d7f096ced-kube-api-access-srzlp\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.893216 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c623ed59-a1b3-491e-9498-b53d7f096ced-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.893224 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c623ed59-a1b3-491e-9498-b53d7f096ced-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.893790 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" (UID: "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.894033 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-config" (OuterVolumeSpecName: "config") pod "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" (UID: "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.894021 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-client-ca" (OuterVolumeSpecName: "client-ca") pod "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" (UID: "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.896327 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" (UID: "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.896500 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-kube-api-access-rxrz2" (OuterVolumeSpecName: "kube-api-access-rxrz2") pod "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" (UID: "489ebeaa-82d0-4a83-ae8b-ffdc565ec43c"). InnerVolumeSpecName "kube-api-access-rxrz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.911103 4940 generic.go:334] "Generic (PLEG): container finished" podID="c623ed59-a1b3-491e-9498-b53d7f096ced" containerID="68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208" exitCode=0 Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.911186 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" event={"ID":"c623ed59-a1b3-491e-9498-b53d7f096ced","Type":"ContainerDied","Data":"68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208"} Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.911196 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.911217 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m" event={"ID":"c623ed59-a1b3-491e-9498-b53d7f096ced","Type":"ContainerDied","Data":"415a2044068d2ac6aa74eb3990543a37480fc2727d482abcf45c7edccaadaf0b"} Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.911236 4940 scope.go:117] "RemoveContainer" containerID="68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.914107 4940 generic.go:334] "Generic (PLEG): container finished" podID="489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" containerID="21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9" exitCode=0 Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.914156 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.914188 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" event={"ID":"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c","Type":"ContainerDied","Data":"21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9"} Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.914243 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh" event={"ID":"489ebeaa-82d0-4a83-ae8b-ffdc565ec43c","Type":"ContainerDied","Data":"311240400fd361a20e1a9dc543b3c77e2601af2ade07744ee34afd3480c73e60"} Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.933377 4940 scope.go:117] "RemoveContainer" containerID="68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208" Feb 23 08:52:55 crc kubenswrapper[4940]: E0223 08:52:55.934573 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208\": container with ID starting with 68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208 not found: ID does not exist" containerID="68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.934647 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208"} err="failed to get container status \"68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208\": rpc error: code = NotFound desc = could not find container \"68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208\": container with ID starting with 68ffc185a8925cc49c6b4f73249efbf89f6933f4016d104237ec5cffe4681208 not found: ID does not 
exist" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.934685 4940 scope.go:117] "RemoveContainer" containerID="21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.954169 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m"] Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.957805 4940 scope.go:117] "RemoveContainer" containerID="21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.957915 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f498b9-88t7m"] Feb 23 08:52:55 crc kubenswrapper[4940]: E0223 08:52:55.958340 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9\": container with ID starting with 21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9 not found: ID does not exist" containerID="21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.958383 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9"} err="failed to get container status \"21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9\": rpc error: code = NotFound desc = could not find container \"21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9\": container with ID starting with 21b635b1bbe36f6d0ac23b806c7483997279db955e42e7a66ac8164430ea19d9 not found: ID does not exist" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.966437 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh"] Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.968520 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cb7cb94c-lw7gh"] Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.994972 4940 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.995001 4940 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.995011 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxrz2\" (UniqueName: \"kubernetes.io/projected/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-kube-api-access-rxrz2\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.995022 4940 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:55 crc kubenswrapper[4940]: I0223 08:52:55.995030 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.042685 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d"] Feb 23 08:52:57 crc kubenswrapper[4940]: E0223 08:52:57.042985 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c623ed59-a1b3-491e-9498-b53d7f096ced" containerName="route-controller-manager" Feb 23 
08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.043005 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c623ed59-a1b3-491e-9498-b53d7f096ced" containerName="route-controller-manager" Feb 23 08:52:57 crc kubenswrapper[4940]: E0223 08:52:57.043025 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" containerName="controller-manager" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.043039 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" containerName="controller-manager" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.043225 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" containerName="controller-manager" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.043259 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c623ed59-a1b3-491e-9498-b53d7f096ced" containerName="route-controller-manager" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.043889 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.047310 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.050034 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.050083 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.050354 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.050573 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.051136 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.055641 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5"] Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.057029 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.060353 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.061231 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.061479 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.061574 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.061768 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.061910 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.062427 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.067566 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5"] Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.072971 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d"] Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.111933 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tw8fz\" (UniqueName: \"kubernetes.io/projected/a727a615-f260-4f2f-b318-1050563eabe4-kube-api-access-tw8fz\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.112151 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5394aaf-81fd-4241-915e-515d577dbc31-serving-cert\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.112441 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5394aaf-81fd-4241-915e-515d577dbc31-client-ca\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.112554 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a727a615-f260-4f2f-b318-1050563eabe4-serving-cert\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.112839 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-client-ca\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " 
pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.112992 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-config\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.113099 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-proxy-ca-bundles\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.113355 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5394aaf-81fd-4241-915e-515d577dbc31-config\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.113487 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdg4s\" (UniqueName: \"kubernetes.io/projected/f5394aaf-81fd-4241-915e-515d577dbc31-kube-api-access-vdg4s\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215241 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5394aaf-81fd-4241-915e-515d577dbc31-client-ca\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215290 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a727a615-f260-4f2f-b318-1050563eabe4-serving-cert\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215312 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-client-ca\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215338 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-config\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215359 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-proxy-ca-bundles\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 
08:52:57.215386 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5394aaf-81fd-4241-915e-515d577dbc31-config\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215419 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdg4s\" (UniqueName: \"kubernetes.io/projected/f5394aaf-81fd-4241-915e-515d577dbc31-kube-api-access-vdg4s\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215442 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8fz\" (UniqueName: \"kubernetes.io/projected/a727a615-f260-4f2f-b318-1050563eabe4-kube-api-access-tw8fz\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.215499 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5394aaf-81fd-4241-915e-515d577dbc31-serving-cert\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.216213 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5394aaf-81fd-4241-915e-515d577dbc31-client-ca\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: 
\"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.216890 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-proxy-ca-bundles\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.216906 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5394aaf-81fd-4241-915e-515d577dbc31-config\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.217322 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-client-ca\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.219248 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a727a615-f260-4f2f-b318-1050563eabe4-config\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.221912 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f5394aaf-81fd-4241-915e-515d577dbc31-serving-cert\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.222567 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a727a615-f260-4f2f-b318-1050563eabe4-serving-cert\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.234521 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8fz\" (UniqueName: \"kubernetes.io/projected/a727a615-f260-4f2f-b318-1050563eabe4-kube-api-access-tw8fz\") pod \"controller-manager-7b46dcdfdb-hgp6d\" (UID: \"a727a615-f260-4f2f-b318-1050563eabe4\") " pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.240038 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdg4s\" (UniqueName: \"kubernetes.io/projected/f5394aaf-81fd-4241-915e-515d577dbc31-kube-api-access-vdg4s\") pod \"route-controller-manager-776cc8f98f-xhxz5\" (UID: \"f5394aaf-81fd-4241-915e-515d577dbc31\") " pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.352166 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="489ebeaa-82d0-4a83-ae8b-ffdc565ec43c" path="/var/lib/kubelet/pods/489ebeaa-82d0-4a83-ae8b-ffdc565ec43c/volumes" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.352960 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c623ed59-a1b3-491e-9498-b53d7f096ced" 
path="/var/lib/kubelet/pods/c623ed59-a1b3-491e-9498-b53d7f096ced/volumes" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.395417 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.410758 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.683337 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d"] Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.852792 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5"] Feb 23 08:52:57 crc kubenswrapper[4940]: W0223 08:52:57.863905 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5394aaf_81fd_4241_915e_515d577dbc31.slice/crio-5d3d2c5859998fe37ad772355db52ead430e6165bf0608c5e9ce34679bf7ee53 WatchSource:0}: Error finding container 5d3d2c5859998fe37ad772355db52ead430e6165bf0608c5e9ce34679bf7ee53: Status 404 returned error can't find the container with id 5d3d2c5859998fe37ad772355db52ead430e6165bf0608c5e9ce34679bf7ee53 Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.933909 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" event={"ID":"f5394aaf-81fd-4241-915e-515d577dbc31","Type":"ContainerStarted","Data":"5d3d2c5859998fe37ad772355db52ead430e6165bf0608c5e9ce34679bf7ee53"} Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.936072 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" 
event={"ID":"a727a615-f260-4f2f-b318-1050563eabe4","Type":"ContainerStarted","Data":"b46dc984ce7f2ebf19646191d285ecf1de0820c668d32766e1a9062966970aeb"} Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.936101 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" event={"ID":"a727a615-f260-4f2f-b318-1050563eabe4","Type":"ContainerStarted","Data":"3242643ce33435f2eb5a094178fcacaaa17cfc835518d7153b236c9fa9b1046c"} Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.937706 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.939354 4940 patch_prober.go:28] interesting pod/controller-manager-7b46dcdfdb-hgp6d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.939407 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" podUID="a727a615-f260-4f2f-b318-1050563eabe4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Feb 23 08:52:57 crc kubenswrapper[4940]: I0223 08:52:57.954138 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" podStartSLOduration=2.954112529 podStartE2EDuration="2.954112529s" podCreationTimestamp="2026-02-23 08:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:57.951038932 +0000 UTC m=+309.334245079" 
watchObservedRunningTime="2026-02-23 08:52:57.954112529 +0000 UTC m=+309.337318686" Feb 23 08:52:58 crc kubenswrapper[4940]: I0223 08:52:58.945038 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" event={"ID":"f5394aaf-81fd-4241-915e-515d577dbc31","Type":"ContainerStarted","Data":"00f2786a049d2e51a2db6c53d09695198aadf5ec03a0f5e51c12cf7557e66c69"} Feb 23 08:52:58 crc kubenswrapper[4940]: I0223 08:52:58.945412 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:58 crc kubenswrapper[4940]: I0223 08:52:58.948851 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b46dcdfdb-hgp6d" Feb 23 08:52:58 crc kubenswrapper[4940]: I0223 08:52:58.951639 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" Feb 23 08:52:58 crc kubenswrapper[4940]: I0223 08:52:58.963435 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cc8f98f-xhxz5" podStartSLOduration=3.963414733 podStartE2EDuration="3.963414733s" podCreationTimestamp="2026-02-23 08:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:52:58.959946732 +0000 UTC m=+310.343152889" watchObservedRunningTime="2026-02-23 08:52:58.963414733 +0000 UTC m=+310.346620890" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.146060 4940 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.146551 4940 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5" gracePeriod=15 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.146857 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f" gracePeriod=15 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.146919 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9" gracePeriod=15 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.146968 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1" gracePeriod=15 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.147021 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07" gracePeriod=15 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.147855 4940 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148219 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148234 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148246 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148312 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148325 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148334 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148347 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148355 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148367 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148374 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148383 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148393 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148403 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148410 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148419 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148426 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148557 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148570 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148577 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148586 4940 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148594 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148602 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148625 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148814 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148829 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: E0223 08:53:00.148849 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148855 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.148987 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.149364 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.150423 4940 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.150950 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.160567 4940 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263177 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263240 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263280 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263303 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263378 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263409 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263435 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.263467 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.364795 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365237 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365329 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365490 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365463 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365191 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365546 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365546 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365791 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365900 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.365999 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.366074 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.366087 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.366235 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.366306 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.366185 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.956287 4940 generic.go:334] "Generic (PLEG): container finished" podID="07c4db25-8f75-4bad-8f11-d06e6a20d747" containerID="dfb3db24e675c521711679c520699168c3131a8920411446a50d3fd99ea40f86" exitCode=0 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.956359 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07c4db25-8f75-4bad-8f11-d06e6a20d747","Type":"ContainerDied","Data":"dfb3db24e675c521711679c520699168c3131a8920411446a50d3fd99ea40f86"} Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.956961 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.958367 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.959684 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.960771 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f" exitCode=0 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.960798 4940 
generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9" exitCode=0 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.960808 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1" exitCode=0 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.960818 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07" exitCode=2 Feb 23 08:53:00 crc kubenswrapper[4940]: I0223 08:53:00.960860 4940 scope.go:117] "RemoveContainer" containerID="947d88435f9dfeac492c7a476d5413d6c9f65fea2e6a8322bed3a197be4e7c11" Feb 23 08:53:01 crc kubenswrapper[4940]: I0223 08:53:01.978589 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.301148 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.301729 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.394367 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c4db25-8f75-4bad-8f11-d06e6a20d747-kube-api-access\") pod \"07c4db25-8f75-4bad-8f11-d06e6a20d747\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.394765 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-kubelet-dir\") pod \"07c4db25-8f75-4bad-8f11-d06e6a20d747\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.394847 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-var-lock\") pod \"07c4db25-8f75-4bad-8f11-d06e6a20d747\" (UID: \"07c4db25-8f75-4bad-8f11-d06e6a20d747\") " Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.395070 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07c4db25-8f75-4bad-8f11-d06e6a20d747" (UID: "07c4db25-8f75-4bad-8f11-d06e6a20d747"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.395212 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-var-lock" (OuterVolumeSpecName: "var-lock") pod "07c4db25-8f75-4bad-8f11-d06e6a20d747" (UID: "07c4db25-8f75-4bad-8f11-d06e6a20d747"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.401811 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c4db25-8f75-4bad-8f11-d06e6a20d747-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07c4db25-8f75-4bad-8f11-d06e6a20d747" (UID: "07c4db25-8f75-4bad-8f11-d06e6a20d747"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.496711 4940 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.496746 4940 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07c4db25-8f75-4bad-8f11-d06e6a20d747-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.496776 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c4db25-8f75-4bad-8f11-d06e6a20d747-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.595350 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 
08:53:02.596238 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.596751 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.597152 4940 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.698821 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.698862 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.698957 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.698955 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.699009 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.699093 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.699221 4940 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.699232 4940 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.699240 4940 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.988447 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"07c4db25-8f75-4bad-8f11-d06e6a20d747","Type":"ContainerDied","Data":"fbf366600e79bf69c268e8c56bf01c1ec34bfc890c7521e8d8a59226f4f0e427"} Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.988850 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf366600e79bf69c268e8c56bf01c1ec34bfc890c7521e8d8a59226f4f0e427" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.988536 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.992930 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.993789 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5" exitCode=0 Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.993863 4940 scope.go:117] "RemoveContainer" containerID="36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f" Feb 23 08:53:02 crc kubenswrapper[4940]: I0223 08:53:02.994021 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.001751 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.002389 4940 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.011932 4940 scope.go:117] "RemoveContainer" containerID="c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.018476 4940 status_manager.go:851] "Failed to get status 
for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.018928 4940 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.023627 4940 scope.go:117] "RemoveContainer" containerID="b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.036634 4940 scope.go:117] "RemoveContainer" containerID="ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.053031 4940 scope.go:117] "RemoveContainer" containerID="0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.067445 4940 scope.go:117] "RemoveContainer" containerID="a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.086428 4940 scope.go:117] "RemoveContainer" containerID="36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f" Feb 23 08:53:03 crc kubenswrapper[4940]: E0223 08:53:03.087239 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\": container with ID starting with 36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f not found: ID does not exist" 
containerID="36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.087284 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f"} err="failed to get container status \"36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\": rpc error: code = NotFound desc = could not find container \"36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f\": container with ID starting with 36385cbf7f0935ced8cb20a3a65c7d1802e2faf23bce8aeca1a040662b878b0f not found: ID does not exist" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.087310 4940 scope.go:117] "RemoveContainer" containerID="c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9" Feb 23 08:53:03 crc kubenswrapper[4940]: E0223 08:53:03.087881 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\": container with ID starting with c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9 not found: ID does not exist" containerID="c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.087941 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9"} err="failed to get container status \"c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\": rpc error: code = NotFound desc = could not find container \"c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9\": container with ID starting with c47633e36ba0e4b4070060e2925e103b6efa68771b18f7a7f429546f9f05c1d9 not found: ID does not exist" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.087957 4940 scope.go:117] 
"RemoveContainer" containerID="b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1" Feb 23 08:53:03 crc kubenswrapper[4940]: E0223 08:53:03.088433 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\": container with ID starting with b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1 not found: ID does not exist" containerID="b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.088476 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1"} err="failed to get container status \"b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\": rpc error: code = NotFound desc = could not find container \"b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1\": container with ID starting with b6fefd6cf7ee54d7687a700f08f52ff20d438289f846da4d811f3295e85455c1 not found: ID does not exist" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.088510 4940 scope.go:117] "RemoveContainer" containerID="ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07" Feb 23 08:53:03 crc kubenswrapper[4940]: E0223 08:53:03.088782 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\": container with ID starting with ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07 not found: ID does not exist" containerID="ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.088806 4940 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07"} err="failed to get container status \"ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\": rpc error: code = NotFound desc = could not find container \"ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07\": container with ID starting with ee77b54ead604e1f64b0057494a4ba082ac7f59289f563ce28ec6048485e3b07 not found: ID does not exist" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.088820 4940 scope.go:117] "RemoveContainer" containerID="0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5" Feb 23 08:53:03 crc kubenswrapper[4940]: E0223 08:53:03.089216 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\": container with ID starting with 0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5 not found: ID does not exist" containerID="0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.089237 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5"} err="failed to get container status \"0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\": rpc error: code = NotFound desc = could not find container \"0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5\": container with ID starting with 0ee3bc200ebaed1c133ad32e65db44c8613c48203736dada71c0881d3d4f45a5 not found: ID does not exist" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.089254 4940 scope.go:117] "RemoveContainer" containerID="a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258" Feb 23 08:53:03 crc kubenswrapper[4940]: E0223 08:53:03.089648 4940 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\": container with ID starting with a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258 not found: ID does not exist" containerID="a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.089669 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258"} err="failed to get container status \"a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\": rpc error: code = NotFound desc = could not find container \"a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258\": container with ID starting with a7d8e4f8739813cb8d24afc9999755886cd4a77958b8b4e1514fc0bc53403258 not found: ID does not exist" Feb 23 08:53:03 crc kubenswrapper[4940]: I0223 08:53:03.354797 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 23 08:53:05 crc kubenswrapper[4940]: E0223 08:53:05.193706 4940 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:05 crc kubenswrapper[4940]: I0223 08:53:05.194761 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:05 crc kubenswrapper[4940]: E0223 08:53:05.222814 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896d42b27237087 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:53:05.222185095 +0000 UTC m=+316.605391272,LastTimestamp:2026-02-23 08:53:05.222185095 +0000 UTC m=+316.605391272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:53:05 crc kubenswrapper[4940]: E0223 08:53:05.790314 4940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896d42b27237087 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 08:53:05.222185095 +0000 UTC m=+316.605391272,LastTimestamp:2026-02-23 08:53:05.222185095 +0000 UTC m=+316.605391272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 08:53:06 crc kubenswrapper[4940]: I0223 08:53:06.010088 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2"} Feb 23 08:53:06 crc kubenswrapper[4940]: I0223 08:53:06.010170 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d7b6048c918d59d433dfdf5d19752f9433d96a71ef8113eb5236b18fd82e04de"} Feb 23 08:53:06 crc kubenswrapper[4940]: I0223 08:53:06.010733 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.010739 4940 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.497727 4940 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.498108 4940 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.498694 4940 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.498915 4940 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.499189 4940 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:06 crc kubenswrapper[4940]: I0223 08:53:06.499218 4940 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.499539 4940 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms" Feb 23 08:53:06 crc kubenswrapper[4940]: E0223 08:53:06.700178 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Feb 23 08:53:07 crc kubenswrapper[4940]: E0223 08:53:07.100795 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Feb 23 08:53:07 crc kubenswrapper[4940]: E0223 08:53:07.902572 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Feb 23 08:53:09 crc kubenswrapper[4940]: I0223 08:53:09.284588 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:53:09 crc kubenswrapper[4940]: I0223 08:53:09.285063 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:53:09 crc kubenswrapper[4940]: W0223 08:53:09.285468 4940 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27341": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:09 crc kubenswrapper[4940]: E0223 08:53:09.285564 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27341\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:09 crc kubenswrapper[4940]: W0223 08:53:09.285788 4940 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27343": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:09 crc kubenswrapper[4940]: E0223 08:53:09.285890 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27343\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:09 crc kubenswrapper[4940]: I0223 08:53:09.347717 
4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:09 crc kubenswrapper[4940]: I0223 08:53:09.386151 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:53:09 crc kubenswrapper[4940]: I0223 08:53:09.386209 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:53:09 crc kubenswrapper[4940]: W0223 08:53:09.387142 4940 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27341": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:09 crc kubenswrapper[4940]: E0223 08:53:09.387213 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27341\": dial tcp 
38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:09 crc kubenswrapper[4940]: E0223 08:53:09.504471 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.285232 4940 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.285362 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:55:12.285331722 +0000 UTC m=+443.668537909 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.285429 4940 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.285508 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 08:55:12.285489077 +0000 UTC m=+443.668695264 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.386496 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.386525 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:10 crc kubenswrapper[4940]: W0223 08:53:10.386922 4940 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27341": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:10 crc kubenswrapper[4940]: E0223 08:53:10.387003 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27341\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.386708 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 
08:53:11.386783 4940 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.386871 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 08:55:13.386843011 +0000 UTC m=+444.770049208 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.386710 4940 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.386907 4940 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.386948 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 08:55:13.386935934 +0000 UTC m=+444.770142121 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 23 08:53:11 crc kubenswrapper[4940]: W0223 08:53:11.578090 4940 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27343": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.578175 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27343\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:11 crc kubenswrapper[4940]: W0223 08:53:11.861560 4940 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27341": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.862114 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27341\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:11 crc kubenswrapper[4940]: W0223 08:53:11.981427 4940 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27341": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:11 crc kubenswrapper[4940]: E0223 08:53:11.981537 4940 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27341\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:12 crc kubenswrapper[4940]: E0223 08:53:12.705493 4940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="6.4s" Feb 23 08:53:13 crc kubenswrapper[4940]: W0223 08:53:13.271375 4940 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27341": dial tcp 38.102.83.222:6443: connect: connection refused Feb 23 08:53:13 crc kubenswrapper[4940]: E0223 08:53:13.271497 4940 reflector.go:158] "Unhandled 
Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27341\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 23 08:53:13 crc kubenswrapper[4940]: I0223 08:53:13.349665 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:13 crc kubenswrapper[4940]: I0223 08:53:13.351149 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:13 crc kubenswrapper[4940]: I0223 08:53:13.381989 4940 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:13 crc kubenswrapper[4940]: I0223 08:53:13.382042 4940 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:13 crc kubenswrapper[4940]: E0223 08:53:13.383258 4940 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:13 crc kubenswrapper[4940]: I0223 08:53:13.384575 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:14 crc kubenswrapper[4940]: I0223 08:53:14.063712 4940 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7d3a80331fdfda61e0376df30819346059afa39c71933da04c167db2c7c9537b" exitCode=0 Feb 23 08:53:14 crc kubenswrapper[4940]: I0223 08:53:14.063843 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7d3a80331fdfda61e0376df30819346059afa39c71933da04c167db2c7c9537b"} Feb 23 08:53:14 crc kubenswrapper[4940]: I0223 08:53:14.064050 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b7e8b54ceabbcc71ca4a45b6d460367aa8a90ae56166fd4098116b3cd6aa6da"} Feb 23 08:53:14 crc kubenswrapper[4940]: I0223 08:53:14.064545 4940 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:14 crc kubenswrapper[4940]: I0223 08:53:14.064581 4940 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:14 crc kubenswrapper[4940]: E0223 08:53:14.065204 4940 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:14 crc kubenswrapper[4940]: I0223 08:53:14.065263 4940 status_manager.go:851] "Failed to get status for pod" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.073731 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.075275 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.075313 4940 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566" exitCode=1 Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.075360 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566"} Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.075786 4940 scope.go:117] "RemoveContainer" containerID="4aceeb7fc620d96b7e2a474de52d7b3fb007417f4f1ec81e5853724d54381566" Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.085785 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3dd29cec310129e77de000a6d51b108714ebb81fc42f57598fa4ff9636ad7f9d"} Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.085827 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"02f5896c587d7ed29d3bcc13be52642c9d35ce005555a14d0463266aa26e104a"} Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.085839 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"377b2842032777cf755ca44cb6e46b558f371bee2d79fb69741aba539e90a424"} Feb 23 08:53:15 crc kubenswrapper[4940]: I0223 08:53:15.967169 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.093502 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.095230 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.095358 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"679db20711c871612c2dbc9e990ff4a74baa67be80289ce1d09cb92c22be8e4d"} Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.098620 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31c60392345bb948cfbf6d7a2d166481a23b7a5c0313c2a1fe85be58d3f7732d"} Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.098677 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe7ee70dfe2bb0a436835e029d6a743f29f2697314a9d0550f3e6d0c54c0b799"} Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.098781 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.098938 4940 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:16 crc kubenswrapper[4940]: I0223 08:53:16.098968 4940 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:18 crc kubenswrapper[4940]: I0223 08:53:18.260711 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 08:53:18 crc kubenswrapper[4940]: I0223 08:53:18.385135 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:18 crc kubenswrapper[4940]: I0223 08:53:18.385233 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:18 crc kubenswrapper[4940]: I0223 08:53:18.394098 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:20 crc kubenswrapper[4940]: I0223 08:53:20.483292 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:53:20 crc kubenswrapper[4940]: I0223 08:53:20.492205 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:53:20 crc kubenswrapper[4940]: I0223 08:53:20.883573 4940 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 08:53:20 crc kubenswrapper[4940]: I0223 08:53:20.883598 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 08:53:21 crc kubenswrapper[4940]: I0223 08:53:21.107483 4940 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:21 crc kubenswrapper[4940]: I0223 08:53:21.123925 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:53:21 crc kubenswrapper[4940]: I0223 08:53:21.608767 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 08:53:22 crc kubenswrapper[4940]: I0223 08:53:22.128487 4940 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:22 crc kubenswrapper[4940]: I0223 08:53:22.128528 4940 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:22 crc kubenswrapper[4940]: I0223 08:53:22.134780 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:22 crc kubenswrapper[4940]: I0223 08:53:22.137734 4940 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d3bbf0e0-8d27-4922-81e2-c5cf1603ba9c" Feb 23 08:53:23 crc kubenswrapper[4940]: I0223 08:53:23.136054 4940 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:23 crc kubenswrapper[4940]: I0223 
08:53:23.136479 4940 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="67d1926d-47ed-4a5c-b868-690599126446" Feb 23 08:53:24 crc kubenswrapper[4940]: E0223 08:53:24.377858 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 08:53:24 crc kubenswrapper[4940]: E0223 08:53:24.392348 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 08:53:24 crc kubenswrapper[4940]: E0223 08:53:24.418763 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 08:53:25 crc kubenswrapper[4940]: I0223 08:53:25.973241 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 08:53:29 crc kubenswrapper[4940]: I0223 08:53:29.372358 4940 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d3bbf0e0-8d27-4922-81e2-c5cf1603ba9c" Feb 23 08:53:30 crc kubenswrapper[4940]: I0223 08:53:30.471125 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 08:53:30 crc kubenswrapper[4940]: 
I0223 08:53:30.885334 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 08:53:31 crc kubenswrapper[4940]: I0223 08:53:31.806972 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 08:53:32 crc kubenswrapper[4940]: I0223 08:53:32.272431 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 08:53:32 crc kubenswrapper[4940]: I0223 08:53:32.445141 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 08:53:32 crc kubenswrapper[4940]: I0223 08:53:32.583184 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 08:53:32 crc kubenswrapper[4940]: I0223 08:53:32.669013 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 08:53:32 crc kubenswrapper[4940]: I0223 08:53:32.778678 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.111878 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.140421 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.173463 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.358305 4940 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.438693 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.518377 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.726949 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.732207 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.825906 4940 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 08:53:33 crc kubenswrapper[4940]: I0223 08:53:33.843907 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.172663 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.207977 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.209010 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.225906 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.238889 4940 reflector.go:368] Caches 
populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.240316 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.405295 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.414725 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.430456 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.456964 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.791209 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.838627 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 08:53:34 crc kubenswrapper[4940]: I0223 08:53:34.977996 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.032148 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.121501 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 
08:53:35.129773 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.140661 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.189888 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.223891 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.302732 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.346282 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.424394 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.424533 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.512336 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.548560 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.590543 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 
08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.591856 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.629467 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.687956 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.705313 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.738460 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.773277 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.844835 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.859220 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.881363 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.904309 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.932976 4940 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 08:53:35 crc kubenswrapper[4940]: I0223 08:53:35.961846 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.096291 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.103865 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.155235 4940 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.173753 4940 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.178483 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.178533 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.180159 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.186069 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.203606 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.203579978 podStartE2EDuration="15.203579978s" podCreationTimestamp="2026-02-23 08:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:53:36.202377181 +0000 UTC m=+347.585583378" watchObservedRunningTime="2026-02-23 08:53:36.203579978 +0000 UTC m=+347.586786175" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.206956 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.220675 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.245488 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.251402 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.265476 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.344809 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.451750 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.559145 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.685348 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.699583 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.785150 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.795369 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.805134 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.876855 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 08:53:36 crc kubenswrapper[4940]: I0223 08:53:36.894547 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.103522 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 08:53:37 
crc kubenswrapper[4940]: I0223 08:53:37.106269 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.240115 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.261089 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.322473 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.347469 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.364637 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.376428 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.390733 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.391170 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.406096 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.411192 4940 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.427466 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.428554 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.439611 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.455310 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.463447 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.610098 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.649085 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.831059 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.843361 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 08:53:37 crc kubenswrapper[4940]: I0223 08:53:37.882172 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 08:53:38 crc 
kubenswrapper[4940]: I0223 08:53:38.046328 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.058985 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.076341 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.077497 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.134731 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.140435 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.156804 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.179734 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.319815 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.345687 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.370382 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.434916 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.452339 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.605244 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.665106 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.675861 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.694474 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.766711 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.766718 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.827934 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.850048 4940 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.879987 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.966169 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.966396 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.973801 4940 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 08:53:38 crc kubenswrapper[4940]: I0223 08:53:38.998462 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.074125 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.121877 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.184830 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.229176 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.231512 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.384298 4940 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.435111 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.462228 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.471435 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.483258 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.522187 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.528814 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.545323 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.606875 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.691366 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.767902 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 
23 08:53:39 crc kubenswrapper[4940]: I0223 08:53:39.879354 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.051218 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.072426 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.109234 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.182825 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.199361 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.203251 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.228695 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.341966 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.342974 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" 
Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.395459 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.440348 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.554986 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.762854 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.764733 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.823966 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.893153 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 08:53:40 crc kubenswrapper[4940]: I0223 08:53:40.897730 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.006969 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.026803 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.068330 4940 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.080349 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.117735 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.173354 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.300974 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.372811 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.383884 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.440334 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.528469 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.528869 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.625188 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 08:53:41 crc 
kubenswrapper[4940]: I0223 08:53:41.948101 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.962849 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 08:53:41 crc kubenswrapper[4940]: I0223 08:53:41.979022 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.065509 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.117873 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.125475 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.180761 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.220179 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.316796 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.383757 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.471037 4940 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.533832 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.605722 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.729049 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.751554 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.753945 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.762094 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.773833 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.852923 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.869203 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 08:53:42 crc kubenswrapper[4940]: I0223 08:53:42.965011 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.012565 4940 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.092982 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.220361 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.239876 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.290344 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.329152 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.574196 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.595545 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.692066 4940 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.692491 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2" gracePeriod=5 Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.742893 
4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.790910 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.808669 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.828994 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.883901 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.938792 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.946501 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 08:53:43 crc kubenswrapper[4940]: I0223 08:53:43.950139 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.024932 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.107299 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.114115 4940 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.215687 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.324713 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.371598 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.410940 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.432577 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.459929 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.499060 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.640570 4940 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.780282 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 08:53:44 crc kubenswrapper[4940]: I0223 08:53:44.950506 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 08:53:45 crc 
kubenswrapper[4940]: I0223 08:53:45.068171 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.109470 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.186312 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.337500 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.500013 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.529920 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.572563 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.593136 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 08:53:45 crc kubenswrapper[4940]: I0223 08:53:45.791666 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.006057 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.040103 4940 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.065184 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.217051 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.236126 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.289375 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.368099 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.626412 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.637704 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 08:53:46 crc kubenswrapper[4940]: I0223 08:53:46.694231 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 08:53:47 crc kubenswrapper[4940]: I0223 08:53:47.048192 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 08:53:47 crc kubenswrapper[4940]: I0223 08:53:47.064129 4940 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 08:53:47 crc kubenswrapper[4940]: I0223 08:53:47.224239 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 08:53:47 crc kubenswrapper[4940]: I0223 08:53:47.398922 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 08:53:47 crc kubenswrapper[4940]: I0223 08:53:47.460790 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 08:53:47 crc kubenswrapper[4940]: I0223 08:53:47.876136 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.169245 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.327851 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.501291 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.797149 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.810212 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.810316 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997383 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997463 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997516 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997534 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997575 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997817 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997820 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997835 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:48 crc kubenswrapper[4940]: I0223 08:53:48.997878 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.005419 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.006296 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.099187 4940 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.099530 4940 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.099747 4940 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.099900 4940 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.100394 4940 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.304233 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.305062 4940 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerID="1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2" exitCode=137 Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.305125 4940 scope.go:117] "RemoveContainer" containerID="1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.305132 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.321942 4940 scope.go:117] "RemoveContainer" containerID="1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2" Feb 23 08:53:49 crc kubenswrapper[4940]: E0223 08:53:49.322533 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2\": container with ID starting with 1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2 not found: ID does not exist" containerID="1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.322683 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2"} err="failed to get container status \"1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2\": rpc error: code = NotFound desc = could not find container \"1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2\": container with ID starting with 1a4938524af72dd26c60b7563971ff5edc625bb2c38bdef051265652d9c1b4c2 not found: ID does not exist" Feb 23 08:53:49 crc kubenswrapper[4940]: I0223 08:53:49.353668 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 23 08:53:56 crc 
kubenswrapper[4940]: I0223 08:53:56.646681 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wprw9"] Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.647659 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wprw9" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="registry-server" containerID="cri-o://22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee" gracePeriod=30 Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.659827 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqw5m"] Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.660356 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mqw5m" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="registry-server" containerID="cri-o://58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a" gracePeriod=30 Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.667335 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9x9v"] Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.667567 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" containerID="cri-o://c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6" gracePeriod=30 Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.674345 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56j6r"] Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.675506 4940 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-56j6r" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="registry-server" containerID="cri-o://1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107" gracePeriod=30 Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.690087 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7w9jb"] Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.690326 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7w9jb" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="registry-server" containerID="cri-o://019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954" gracePeriod=30 Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.704645 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hf78k"] Feb 23 08:53:56 crc kubenswrapper[4940]: E0223 08:53:56.704899 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" containerName="installer" Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.704914 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" containerName="installer" Feb 23 08:53:56 crc kubenswrapper[4940]: E0223 08:53:56.704926 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.704933 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.705039 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.705050 4940 
memory_manager.go:354] "RemoveStaleState removing state" podUID="07c4db25-8f75-4bad-8f11-d06e6a20d747" containerName="installer"
Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.705444 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.725045 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hf78k"]
Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.905277 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4e776654-5212-41ae-ac30-a4dafdf7a349-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.905354 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e776654-5212-41ae-ac30-a4dafdf7a349-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:56 crc kubenswrapper[4940]: I0223 08:53:56.905398 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z562\" (UniqueName: \"kubernetes.io/projected/4e776654-5212-41ae-ac30-a4dafdf7a349-kube-api-access-6z562\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.006525 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4e776654-5212-41ae-ac30-a4dafdf7a349-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.006929 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e776654-5212-41ae-ac30-a4dafdf7a349-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.006994 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z562\" (UniqueName: \"kubernetes.io/projected/4e776654-5212-41ae-ac30-a4dafdf7a349-kube-api-access-6z562\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.008985 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e776654-5212-41ae-ac30-a4dafdf7a349-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.014543 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4e776654-5212-41ae-ac30-a4dafdf7a349-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.038224 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z562\" (UniqueName: \"kubernetes.io/projected/4e776654-5212-41ae-ac30-a4dafdf7a349-kube-api-access-6z562\") pod \"marketplace-operator-79b997595-hf78k\" (UID: \"4e776654-5212-41ae-ac30-a4dafdf7a349\") " pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.057225 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.088689 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wprw9"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.217230 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-utilities\") pod \"fc357ef5-0994-4918-859b-d623e534da2a\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.217263 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-catalog-content\") pod \"fc357ef5-0994-4918-859b-d623e534da2a\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.218573 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-utilities" (OuterVolumeSpecName: "utilities") pod "fc357ef5-0994-4918-859b-d623e534da2a" (UID: "fc357ef5-0994-4918-859b-d623e534da2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.221980 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc357ef5-0994-4918-859b-d623e534da2a-kube-api-access-5n729" (OuterVolumeSpecName: "kube-api-access-5n729") pod "fc357ef5-0994-4918-859b-d623e534da2a" (UID: "fc357ef5-0994-4918-859b-d623e534da2a"). InnerVolumeSpecName "kube-api-access-5n729". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.226707 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n729\" (UniqueName: \"kubernetes.io/projected/fc357ef5-0994-4918-859b-d623e534da2a-kube-api-access-5n729\") pod \"fc357ef5-0994-4918-859b-d623e534da2a\" (UID: \"fc357ef5-0994-4918-859b-d623e534da2a\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.227515 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.227550 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n729\" (UniqueName: \"kubernetes.io/projected/fc357ef5-0994-4918-859b-d623e534da2a-kube-api-access-5n729\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.228930 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.230519 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqw5m"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.271558 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc357ef5-0994-4918-859b-d623e534da2a" (UID: "fc357ef5-0994-4918-859b-d623e534da2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.277782 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56j6r"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.311378 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7w9jb"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328373 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nm4p\" (UniqueName: \"kubernetes.io/projected/0917e41b-2e05-4596-9ad4-05b382ee9f56-kube-api-access-4nm4p\") pod \"0917e41b-2e05-4596-9ad4-05b382ee9f56\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328432 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-utilities\") pod \"000a58ff-706e-452e-8fa1-493f98d2e314\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328458 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj69h\" (UniqueName: \"kubernetes.io/projected/000a58ff-706e-452e-8fa1-493f98d2e314-kube-api-access-xj69h\") pod \"000a58ff-706e-452e-8fa1-493f98d2e314\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328488 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-catalog-content\") pod \"0917e41b-2e05-4596-9ad4-05b382ee9f56\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328524 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-catalog-content\") pod \"000a58ff-706e-452e-8fa1-493f98d2e314\" (UID: \"000a58ff-706e-452e-8fa1-493f98d2e314\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328548 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn57d\" (UniqueName: \"kubernetes.io/projected/c723f067-bc1f-4d88-aa37-00f8896a9d38-kube-api-access-pn57d\") pod \"c723f067-bc1f-4d88-aa37-00f8896a9d38\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328568 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-utilities\") pod \"c723f067-bc1f-4d88-aa37-00f8896a9d38\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328598 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-utilities\") pod \"0917e41b-2e05-4596-9ad4-05b382ee9f56\" (UID: \"0917e41b-2e05-4596-9ad4-05b382ee9f56\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328638 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvf7\" (UniqueName: \"kubernetes.io/projected/9de4a20c-3f76-4aa8-8347-42f3b3f53145-kube-api-access-xgvf7\") pod \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328665 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-catalog-content\") pod \"c723f067-bc1f-4d88-aa37-00f8896a9d38\" (UID: \"c723f067-bc1f-4d88-aa37-00f8896a9d38\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328685 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics\") pod \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.328723 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca\") pod \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\" (UID: \"9de4a20c-3f76-4aa8-8347-42f3b3f53145\") "
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.329013 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc357ef5-0994-4918-859b-d623e534da2a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.329396 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-utilities" (OuterVolumeSpecName: "utilities") pod "000a58ff-706e-452e-8fa1-493f98d2e314" (UID: "000a58ff-706e-452e-8fa1-493f98d2e314"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.330251 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-utilities" (OuterVolumeSpecName: "utilities") pod "c723f067-bc1f-4d88-aa37-00f8896a9d38" (UID: "c723f067-bc1f-4d88-aa37-00f8896a9d38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.330631 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9de4a20c-3f76-4aa8-8347-42f3b3f53145" (UID: "9de4a20c-3f76-4aa8-8347-42f3b3f53145"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.331370 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-utilities" (OuterVolumeSpecName: "utilities") pod "0917e41b-2e05-4596-9ad4-05b382ee9f56" (UID: "0917e41b-2e05-4596-9ad4-05b382ee9f56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.331723 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c723f067-bc1f-4d88-aa37-00f8896a9d38-kube-api-access-pn57d" (OuterVolumeSpecName: "kube-api-access-pn57d") pod "c723f067-bc1f-4d88-aa37-00f8896a9d38" (UID: "c723f067-bc1f-4d88-aa37-00f8896a9d38"). InnerVolumeSpecName "kube-api-access-pn57d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.332386 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9de4a20c-3f76-4aa8-8347-42f3b3f53145" (UID: "9de4a20c-3f76-4aa8-8347-42f3b3f53145"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.334865 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000a58ff-706e-452e-8fa1-493f98d2e314-kube-api-access-xj69h" (OuterVolumeSpecName: "kube-api-access-xj69h") pod "000a58ff-706e-452e-8fa1-493f98d2e314" (UID: "000a58ff-706e-452e-8fa1-493f98d2e314"). InnerVolumeSpecName "kube-api-access-xj69h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.335553 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0917e41b-2e05-4596-9ad4-05b382ee9f56-kube-api-access-4nm4p" (OuterVolumeSpecName: "kube-api-access-4nm4p") pod "0917e41b-2e05-4596-9ad4-05b382ee9f56" (UID: "0917e41b-2e05-4596-9ad4-05b382ee9f56"). InnerVolumeSpecName "kube-api-access-4nm4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.340050 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9de4a20c-3f76-4aa8-8347-42f3b3f53145-kube-api-access-xgvf7" (OuterVolumeSpecName: "kube-api-access-xgvf7") pod "9de4a20c-3f76-4aa8-8347-42f3b3f53145" (UID: "9de4a20c-3f76-4aa8-8347-42f3b3f53145"). InnerVolumeSpecName "kube-api-access-xgvf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.364384 4940 generic.go:334] "Generic (PLEG): container finished" podID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerID="c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6" exitCode=0
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.364503 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.364536 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" event={"ID":"9de4a20c-3f76-4aa8-8347-42f3b3f53145","Type":"ContainerDied","Data":"c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.364580 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j9x9v" event={"ID":"9de4a20c-3f76-4aa8-8347-42f3b3f53145","Type":"ContainerDied","Data":"ee288c906ac0e67b3520b29e0f987e1ea4c2abfb1f71555f74c6a3a74e194ced"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.364601 4940 scope.go:117] "RemoveContainer" containerID="c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.368244 4940 generic.go:334] "Generic (PLEG): container finished" podID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerID="58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a" exitCode=0
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.368340 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqw5m" event={"ID":"0917e41b-2e05-4596-9ad4-05b382ee9f56","Type":"ContainerDied","Data":"58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.368373 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqw5m" event={"ID":"0917e41b-2e05-4596-9ad4-05b382ee9f56","Type":"ContainerDied","Data":"54043031f90f7b77931128beca5b5a9aa21e1648c68e41a3fe41c22034d9511e"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.368438 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqw5m"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.370683 4940 generic.go:334] "Generic (PLEG): container finished" podID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerID="1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107" exitCode=0
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.370750 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerDied","Data":"1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.370783 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56j6r" event={"ID":"c723f067-bc1f-4d88-aa37-00f8896a9d38","Type":"ContainerDied","Data":"af6420bcd7623b9239412c41965d458c6e7ada3f5b50007544bbcea361894f48"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.370854 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56j6r"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.371067 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c723f067-bc1f-4d88-aa37-00f8896a9d38" (UID: "c723f067-bc1f-4d88-aa37-00f8896a9d38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.375913 4940 generic.go:334] "Generic (PLEG): container finished" podID="000a58ff-706e-452e-8fa1-493f98d2e314" containerID="019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954" exitCode=0
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.375960 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7w9jb"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.376021 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7w9jb" event={"ID":"000a58ff-706e-452e-8fa1-493f98d2e314","Type":"ContainerDied","Data":"019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.376049 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7w9jb" event={"ID":"000a58ff-706e-452e-8fa1-493f98d2e314","Type":"ContainerDied","Data":"e974e27648c51fe2f52df96c81b8212a21cadc8150f286f2259ea35df0f4e4ce"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.378534 4940 generic.go:334] "Generic (PLEG): container finished" podID="fc357ef5-0994-4918-859b-d623e534da2a" containerID="22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee" exitCode=0
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.378571 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerDied","Data":"22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.378593 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wprw9" event={"ID":"fc357ef5-0994-4918-859b-d623e534da2a","Type":"ContainerDied","Data":"a186e27b1095869988f46249fb2a428f422bd7a4816bc0d1cc1d70b853317975"}
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.378708 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wprw9"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.390135 4940 scope.go:117] "RemoveContainer" containerID="c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.391794 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6\": container with ID starting with c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6 not found: ID does not exist" containerID="c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.391832 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6"} err="failed to get container status \"c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6\": rpc error: code = NotFound desc = could not find container \"c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6\": container with ID starting with c8e99b53927abcd81861c6fbe79afe435a022e99706e38f1d1b81d38554144d6 not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.391854 4940 scope.go:117] "RemoveContainer" containerID="58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.394345 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0917e41b-2e05-4596-9ad4-05b382ee9f56" (UID: "0917e41b-2e05-4596-9ad4-05b382ee9f56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.412476 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9x9v"]
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.426579 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9x9v"]
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430284 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nm4p\" (UniqueName: \"kubernetes.io/projected/0917e41b-2e05-4596-9ad4-05b382ee9f56-kube-api-access-4nm4p\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430306 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430316 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj69h\" (UniqueName: \"kubernetes.io/projected/000a58ff-706e-452e-8fa1-493f98d2e314-kube-api-access-xj69h\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430326 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430334 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn57d\" (UniqueName: \"kubernetes.io/projected/c723f067-bc1f-4d88-aa37-00f8896a9d38-kube-api-access-pn57d\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430485 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430497 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0917e41b-2e05-4596-9ad4-05b382ee9f56-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430505 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvf7\" (UniqueName: \"kubernetes.io/projected/9de4a20c-3f76-4aa8-8347-42f3b3f53145-kube-api-access-xgvf7\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430513 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c723f067-bc1f-4d88-aa37-00f8896a9d38-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430457 4940 scope.go:117] "RemoveContainer" containerID="2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430665 4940 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.430682 4940 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9de4a20c-3f76-4aa8-8347-42f3b3f53145-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.433362 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wprw9"]
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.436993 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wprw9"]
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.447897 4940 scope.go:117] "RemoveContainer" containerID="e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.463775 4940 scope.go:117] "RemoveContainer" containerID="58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.464253 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a\": container with ID starting with 58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a not found: ID does not exist" containerID="58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.464325 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a"} err="failed to get container status \"58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a\": rpc error: code = NotFound desc = could not find container \"58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a\": container with ID starting with 58422578cda5f9650c1dc8ae7b5c834ee4cc093018ad56812e160eb7aa441d9a not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.464359 4940 scope.go:117] "RemoveContainer" containerID="2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.464750 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823\": container with ID starting with 2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823 not found: ID does not exist" containerID="2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.464781 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823"} err="failed to get container status \"2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823\": rpc error: code = NotFound desc = could not find container \"2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823\": container with ID starting with 2de3e70c5ef4c59aa5d97f8f4e97511c31312fefba903285ec2309fdcc30a823 not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.464803 4940 scope.go:117] "RemoveContainer" containerID="e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.465391 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22\": container with ID starting with e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22 not found: ID does not exist" containerID="e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.465415 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22"} err="failed to get container status \"e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22\": rpc error: code = NotFound desc = could not find container \"e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22\": container with ID starting with e3c26e2458bf9b516eb733b368692ed4cb6c27e9fc4092b7c616d2438debbc22 not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.465431 4940 scope.go:117] "RemoveContainer" containerID="1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.482460 4940 scope.go:117] "RemoveContainer" containerID="6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.500527 4940 scope.go:117] "RemoveContainer" containerID="b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.501057 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "000a58ff-706e-452e-8fa1-493f98d2e314" (UID: "000a58ff-706e-452e-8fa1-493f98d2e314"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.517309 4940 scope.go:117] "RemoveContainer" containerID="1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.517677 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107\": container with ID starting with 1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107 not found: ID does not exist" containerID="1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.517708 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107"} err="failed to get container status \"1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107\": rpc error: code = NotFound desc = could not find container \"1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107\": container with ID starting with 1175022f71819c931fcc7554b8a5901ca23eb6f7c509e2cde4a60490bde43107 not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.517737 4940 scope.go:117] "RemoveContainer" containerID="6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.518028 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9\": container with ID starting with 6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9 not found: ID does not exist" containerID="6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.518059 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9"} err="failed to get container status \"6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9\": rpc error: code = NotFound desc = could not find container \"6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9\": container with ID starting with 6e097091f20ba50bea8fd842361443e25d8f042b1c7eca9bb8585213cdd1f7b9 not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.518076 4940 scope.go:117] "RemoveContainer" containerID="b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.518471 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030\": container with ID starting with b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030 not found: ID does not exist" containerID="b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.518519 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030"} err="failed to get container status \"b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030\": rpc error: code = NotFound desc = could not find container \"b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030\": container with ID starting with b96e2cb65f19360ec966a4ffd3a6bfe1d49693a85538d6a1b10aaa39c1f84030 not found: ID does not exist"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.518549 4940 scope.go:117] "RemoveContainer" containerID="019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.532206 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000a58ff-706e-452e-8fa1-493f98d2e314-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.536682 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hf78k"]
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.536713 4940 scope.go:117] "RemoveContainer" containerID="6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.555059 4940 scope.go:117] "RemoveContainer" containerID="4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.572135 4940 scope.go:117] "RemoveContainer" containerID="019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954"
Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.572824 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954\": container with ID starting with 019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954 not found: ID does not exist" containerID="019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954"
Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.572857 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954"} err="failed to get container status \"019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954\": rpc error: code = NotFound desc = could not find container \"019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954\": container with ID starting with 019d73b114a5ddd64f206e2e51eb0b9186f9ec8748f063f3273874cfa6be9954 not found: ID does not
exist" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.572882 4940 scope.go:117] "RemoveContainer" containerID="6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349" Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.573206 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349\": container with ID starting with 6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349 not found: ID does not exist" containerID="6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.573240 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349"} err="failed to get container status \"6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349\": rpc error: code = NotFound desc = could not find container \"6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349\": container with ID starting with 6e499ed69c2486907a2224441d634116c23f5ac490cae61461f6eccce172f349 not found: ID does not exist" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.573268 4940 scope.go:117] "RemoveContainer" containerID="4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918" Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.573567 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918\": container with ID starting with 4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918 not found: ID does not exist" containerID="4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.573590 4940 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918"} err="failed to get container status \"4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918\": rpc error: code = NotFound desc = could not find container \"4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918\": container with ID starting with 4008745a0eb03cf617bef2c70c3bd5d21cb592264c0955ee07f3a69074c33918 not found: ID does not exist" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.573628 4940 scope.go:117] "RemoveContainer" containerID="22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.588342 4940 scope.go:117] "RemoveContainer" containerID="9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.613506 4940 scope.go:117] "RemoveContainer" containerID="cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.639956 4940 scope.go:117] "RemoveContainer" containerID="22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee" Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.640323 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee\": container with ID starting with 22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee not found: ID does not exist" containerID="22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.640369 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee"} err="failed to get container status 
\"22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee\": rpc error: code = NotFound desc = could not find container \"22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee\": container with ID starting with 22e5105f9236d955be99940e29187626b8258b579494937380299cb4182398ee not found: ID does not exist" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.640396 4940 scope.go:117] "RemoveContainer" containerID="9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9" Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.640799 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9\": container with ID starting with 9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9 not found: ID does not exist" containerID="9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.640826 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9"} err="failed to get container status \"9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9\": rpc error: code = NotFound desc = could not find container \"9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9\": container with ID starting with 9e0b8b2dd6940a6c2f6b61c84c38c8f1b91743505e3b0f7eb5ba21d5ee3bc1d9 not found: ID does not exist" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.640847 4940 scope.go:117] "RemoveContainer" containerID="cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051" Feb 23 08:53:57 crc kubenswrapper[4940]: E0223 08:53:57.641229 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051\": container with ID starting with cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051 not found: ID does not exist" containerID="cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.641264 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051"} err="failed to get container status \"cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051\": rpc error: code = NotFound desc = could not find container \"cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051\": container with ID starting with cfc9f66e639a8eff2a2cbbdba1e035526c227113f4135eba0828333b03efc051 not found: ID does not exist" Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.713516 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56j6r"] Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.722881 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-56j6r"] Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.736074 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7w9jb"] Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.743751 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7w9jb"] Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.750216 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqw5m"] Feb 23 08:53:57 crc kubenswrapper[4940]: I0223 08:53:57.755158 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mqw5m"] Feb 23 08:53:58 crc kubenswrapper[4940]: I0223 08:53:58.391499 4940 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k" event={"ID":"4e776654-5212-41ae-ac30-a4dafdf7a349","Type":"ContainerStarted","Data":"006fc74fd78c236d2577d972a6970f5b3f60eeef79c4ef99b582f6af99c548d7"} Feb 23 08:53:58 crc kubenswrapper[4940]: I0223 08:53:58.391543 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k" event={"ID":"4e776654-5212-41ae-ac30-a4dafdf7a349","Type":"ContainerStarted","Data":"544c2ddbcfbf26d345856a5ee9400854a92d809166caffc76c02bda1d62d59e3"} Feb 23 08:53:58 crc kubenswrapper[4940]: I0223 08:53:58.392202 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k" Feb 23 08:53:58 crc kubenswrapper[4940]: I0223 08:53:58.395540 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k" Feb 23 08:53:58 crc kubenswrapper[4940]: I0223 08:53:58.414172 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hf78k" podStartSLOduration=2.4141469300000002 podStartE2EDuration="2.41414693s" podCreationTimestamp="2026-02-23 08:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:53:58.408858053 +0000 UTC m=+369.792064270" watchObservedRunningTime="2026-02-23 08:53:58.41414693 +0000 UTC m=+369.797353117" Feb 23 08:53:59 crc kubenswrapper[4940]: I0223 08:53:59.354857 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" path="/var/lib/kubelet/pods/000a58ff-706e-452e-8fa1-493f98d2e314/volumes" Feb 23 08:53:59 crc kubenswrapper[4940]: I0223 08:53:59.358562 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" 
path="/var/lib/kubelet/pods/0917e41b-2e05-4596-9ad4-05b382ee9f56/volumes" Feb 23 08:53:59 crc kubenswrapper[4940]: I0223 08:53:59.360512 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" path="/var/lib/kubelet/pods/9de4a20c-3f76-4aa8-8347-42f3b3f53145/volumes" Feb 23 08:53:59 crc kubenswrapper[4940]: I0223 08:53:59.363032 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" path="/var/lib/kubelet/pods/c723f067-bc1f-4d88-aa37-00f8896a9d38/volumes" Feb 23 08:53:59 crc kubenswrapper[4940]: I0223 08:53:59.364737 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc357ef5-0994-4918-859b-d623e534da2a" path="/var/lib/kubelet/pods/fc357ef5-0994-4918-859b-d623e534da2a/volumes" Feb 23 08:54:31 crc kubenswrapper[4940]: I0223 08:54:31.429724 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:54:31 crc kubenswrapper[4940]: I0223 08:54:31.430322 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:55:01 crc kubenswrapper[4940]: I0223 08:55:01.429537 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:55:01 crc kubenswrapper[4940]: I0223 08:55:01.430064 4940 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.761738 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rhbzs"] Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762373 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762394 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762414 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762426 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762454 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762467 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762481 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762493 4940 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762506 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762518 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762538 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762550 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762568 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762580 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762598 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762616 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762629 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762640 4940 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762678 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762691 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="extract-content" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762707 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762719 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762736 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762747 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="extract-utilities" Feb 23 08:55:07 crc kubenswrapper[4940]: E0223 08:55:07.762769 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762781 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762927 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c723f067-bc1f-4d88-aa37-00f8896a9d38" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762943 4940 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fc357ef5-0994-4918-859b-d623e534da2a" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762960 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="9de4a20c-3f76-4aa8-8347-42f3b3f53145" containerName="marketplace-operator" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.762982 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="000a58ff-706e-452e-8fa1-493f98d2e314" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.763042 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0917e41b-2e05-4596-9ad4-05b382ee9f56" containerName="registry-server" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.763593 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.775275 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rhbzs"] Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859385 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de8eab13-13ee-4b01-8f8c-7ac85271995c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859451 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-registry-tls\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 
crc kubenswrapper[4940]: I0223 08:55:07.859477 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859509 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de8eab13-13ee-4b01-8f8c-7ac85271995c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859527 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-bound-sa-token\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859581 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de8eab13-13ee-4b01-8f8c-7ac85271995c-registry-certificates\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859684 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/de8eab13-13ee-4b01-8f8c-7ac85271995c-trusted-ca\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.859717 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkp4d\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-kube-api-access-hkp4d\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.883594 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.961139 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de8eab13-13ee-4b01-8f8c-7ac85271995c-registry-certificates\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.961200 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de8eab13-13ee-4b01-8f8c-7ac85271995c-trusted-ca\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 
08:55:07.961226 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkp4d\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-kube-api-access-hkp4d\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.961284 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de8eab13-13ee-4b01-8f8c-7ac85271995c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.961326 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-registry-tls\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.961361 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de8eab13-13ee-4b01-8f8c-7ac85271995c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.961384 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-bound-sa-token\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.962601 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de8eab13-13ee-4b01-8f8c-7ac85271995c-trusted-ca\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.962746 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/de8eab13-13ee-4b01-8f8c-7ac85271995c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.964450 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/de8eab13-13ee-4b01-8f8c-7ac85271995c-registry-certificates\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.967862 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/de8eab13-13ee-4b01-8f8c-7ac85271995c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.967887 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-registry-tls\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: 
\"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.982389 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-bound-sa-token\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:07 crc kubenswrapper[4940]: I0223 08:55:07.983465 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkp4d\" (UniqueName: \"kubernetes.io/projected/de8eab13-13ee-4b01-8f8c-7ac85271995c-kube-api-access-hkp4d\") pod \"image-registry-66df7c8f76-rhbzs\" (UID: \"de8eab13-13ee-4b01-8f8c-7ac85271995c\") " pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:08 crc kubenswrapper[4940]: I0223 08:55:08.088736 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:08 crc kubenswrapper[4940]: I0223 08:55:08.280184 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rhbzs"] Feb 23 08:55:08 crc kubenswrapper[4940]: I0223 08:55:08.759570 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" event={"ID":"de8eab13-13ee-4b01-8f8c-7ac85271995c","Type":"ContainerStarted","Data":"b8ac852cafd24172c6affcd9f425a7514b3dda43528b81fa198f16271480cef9"} Feb 23 08:55:08 crc kubenswrapper[4940]: I0223 08:55:08.759626 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" event={"ID":"de8eab13-13ee-4b01-8f8c-7ac85271995c","Type":"ContainerStarted","Data":"2b4d3cf9af6b8c785636fb43cefcdaf28486eca851da35c0fe9473e94e431e2c"} Feb 23 08:55:08 crc kubenswrapper[4940]: I0223 08:55:08.761087 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:08 crc kubenswrapper[4940]: I0223 08:55:08.788284 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" podStartSLOduration=1.788267753 podStartE2EDuration="1.788267753s" podCreationTimestamp="2026-02-23 08:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:55:08.787362894 +0000 UTC m=+440.170569061" watchObservedRunningTime="2026-02-23 08:55:08.788267753 +0000 UTC m=+440.171473910" Feb 23 08:55:12 crc kubenswrapper[4940]: I0223 08:55:12.314756 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:55:12 crc kubenswrapper[4940]: I0223 08:55:12.315442 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:55:12 crc kubenswrapper[4940]: I0223 08:55:12.316337 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:55:12 crc kubenswrapper[4940]: I0223 08:55:12.324033 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:55:12 crc kubenswrapper[4940]: I0223 08:55:12.547941 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.430153 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.430864 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.437962 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.438160 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.448035 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.546366 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.782728 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f650cc8c266351b7334acae20167308c97c20395b268650934ad496fbfba1e5e"} Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.784262 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ec306af9716bc8e2566351054634cfe30645bcbf7b17f208f370d9c553ff0eff"} Feb 23 08:55:13 crc kubenswrapper[4940]: I0223 08:55:13.784294 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c25d016ca3ac2b5853e2552176c13915cab94b769785214681a5ec8c08e1bd2c"} Feb 23 08:55:14 crc kubenswrapper[4940]: I0223 08:55:14.794101 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69f7a71e7848b5b26e0bd08ecaba0bf890c04f164d1c1290bf37407fd1b83216"} Feb 23 08:55:14 crc kubenswrapper[4940]: I0223 08:55:14.794721 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:55:14 crc kubenswrapper[4940]: I0223 08:55:14.797782 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"268287961ae2dc4ff7fbb02aee0ad66a21dbe5a9af5606fa33d9f46ec063b5b3"} Feb 23 08:55:14 crc kubenswrapper[4940]: I0223 08:55:14.797823 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3f78eb6d834f655846848b92d29225881e4e861aae4ed621b59df9d54b294b0d"} Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.740079 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qmdzn"] Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.742755 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.745169 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.750661 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmdzn"] Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.858315 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcb67\" (UniqueName: \"kubernetes.io/projected/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-kube-api-access-fcb67\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.858367 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-catalog-content\") pod 
\"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.858408 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-utilities\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.923001 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6t7mh"] Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.924416 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.929166 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.939698 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t7mh"] Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.959747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcb67\" (UniqueName: \"kubernetes.io/projected/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-kube-api-access-fcb67\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.959803 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-catalog-content\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " 
pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.959833 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr28n\" (UniqueName: \"kubernetes.io/projected/b812b371-b4f8-439d-8a46-152ba8e9b7bf-kube-api-access-fr28n\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.959864 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b812b371-b4f8-439d-8a46-152ba8e9b7bf-utilities\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.959892 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-utilities\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.959923 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b812b371-b4f8-439d-8a46-152ba8e9b7bf-catalog-content\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.960892 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-catalog-content\") pod \"redhat-operators-qmdzn\" (UID: 
\"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.961180 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-utilities\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:27 crc kubenswrapper[4940]: I0223 08:55:27.991642 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcb67\" (UniqueName: \"kubernetes.io/projected/612eb23a-7ea4-4c79-bcfe-a627918a7e3f-kube-api-access-fcb67\") pod \"redhat-operators-qmdzn\" (UID: \"612eb23a-7ea4-4c79-bcfe-a627918a7e3f\") " pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.061651 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b812b371-b4f8-439d-8a46-152ba8e9b7bf-catalog-content\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.061773 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr28n\" (UniqueName: \"kubernetes.io/projected/b812b371-b4f8-439d-8a46-152ba8e9b7bf-kube-api-access-fr28n\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.061802 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b812b371-b4f8-439d-8a46-152ba8e9b7bf-utilities\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " 
pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.062313 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b812b371-b4f8-439d-8a46-152ba8e9b7bf-utilities\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.062655 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b812b371-b4f8-439d-8a46-152ba8e9b7bf-catalog-content\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.075811 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.079444 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr28n\" (UniqueName: \"kubernetes.io/projected/b812b371-b4f8-439d-8a46-152ba8e9b7bf-kube-api-access-fr28n\") pod \"redhat-marketplace-6t7mh\" (UID: \"b812b371-b4f8-439d-8a46-152ba8e9b7bf\") " pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.099315 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rhbzs" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.147301 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kzrfw"] Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.248596 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.423334 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6t7mh"] Feb 23 08:55:28 crc kubenswrapper[4940]: W0223 08:55:28.429378 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb812b371_b4f8_439d_8a46_152ba8e9b7bf.slice/crio-392a1304c38e41c96dce79494fb2061dbe6b229c86bb7f7f4886337e2d1649ce WatchSource:0}: Error finding container 392a1304c38e41c96dce79494fb2061dbe6b229c86bb7f7f4886337e2d1649ce: Status 404 returned error can't find the container with id 392a1304c38e41c96dce79494fb2061dbe6b229c86bb7f7f4886337e2d1649ce Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.493716 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmdzn"] Feb 23 08:55:28 crc kubenswrapper[4940]: W0223 08:55:28.505768 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod612eb23a_7ea4_4c79_bcfe_a627918a7e3f.slice/crio-5ab34b9a015f74b65d40b4e4b4cf3024c85ac7fd11f62baf29124b2db57faa2e WatchSource:0}: Error finding container 5ab34b9a015f74b65d40b4e4b4cf3024c85ac7fd11f62baf29124b2db57faa2e: Status 404 returned error can't find the container with id 5ab34b9a015f74b65d40b4e4b4cf3024c85ac7fd11f62baf29124b2db57faa2e Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.882558 4940 generic.go:334] "Generic (PLEG): container finished" podID="b812b371-b4f8-439d-8a46-152ba8e9b7bf" containerID="bd7f784e390e75c7c6c2fade136fa0716a2a4e7d6be134531eaddb2bd3d96896" exitCode=0 Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.882782 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t7mh" 
event={"ID":"b812b371-b4f8-439d-8a46-152ba8e9b7bf","Type":"ContainerDied","Data":"bd7f784e390e75c7c6c2fade136fa0716a2a4e7d6be134531eaddb2bd3d96896"} Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.883662 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t7mh" event={"ID":"b812b371-b4f8-439d-8a46-152ba8e9b7bf","Type":"ContainerStarted","Data":"392a1304c38e41c96dce79494fb2061dbe6b229c86bb7f7f4886337e2d1649ce"} Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.885753 4940 generic.go:334] "Generic (PLEG): container finished" podID="612eb23a-7ea4-4c79-bcfe-a627918a7e3f" containerID="6c9807e5b55b0ba07270e17a4918f736f8bba0503ed08dcb749a5ab28a7d7477" exitCode=0 Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.885795 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmdzn" event={"ID":"612eb23a-7ea4-4c79-bcfe-a627918a7e3f","Type":"ContainerDied","Data":"6c9807e5b55b0ba07270e17a4918f736f8bba0503ed08dcb749a5ab28a7d7477"} Feb 23 08:55:28 crc kubenswrapper[4940]: I0223 08:55:28.885823 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmdzn" event={"ID":"612eb23a-7ea4-4c79-bcfe-a627918a7e3f","Type":"ContainerStarted","Data":"5ab34b9a015f74b65d40b4e4b4cf3024c85ac7fd11f62baf29124b2db57faa2e"} Feb 23 08:55:29 crc kubenswrapper[4940]: I0223 08:55:29.893891 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t7mh" event={"ID":"b812b371-b4f8-439d-8a46-152ba8e9b7bf","Type":"ContainerStarted","Data":"285def2e565f3e708dd91119aa4cf936e28139b14d5b9df7f6b3513478f305c9"} Feb 23 08:55:29 crc kubenswrapper[4940]: I0223 08:55:29.895267 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmdzn" 
event={"ID":"612eb23a-7ea4-4c79-bcfe-a627918a7e3f","Type":"ContainerStarted","Data":"a117302b24c3956da01e292ba08c65122e6d8f97847a10e916ef75c39f0e5ee6"} Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.121980 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xj85m"] Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.122925 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.124619 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.143352 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xj85m"] Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.186098 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-catalog-content\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.186150 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7rg\" (UniqueName: \"kubernetes.io/projected/67ecdea2-a9cb-4de7-8350-894a47f81718-kube-api-access-qc7rg\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.186189 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-utilities\") pod 
\"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.287441 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-catalog-content\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.287753 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7rg\" (UniqueName: \"kubernetes.io/projected/67ecdea2-a9cb-4de7-8350-894a47f81718-kube-api-access-qc7rg\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.287800 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-utilities\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.288185 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-catalog-content\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.288387 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-utilities\") pod \"certified-operators-xj85m\" (UID: 
\"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.307090 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7rg\" (UniqueName: \"kubernetes.io/projected/67ecdea2-a9cb-4de7-8350-894a47f81718-kube-api-access-qc7rg\") pod \"certified-operators-xj85m\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.338177 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmz5s"] Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.341660 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.344875 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmz5s"] Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.348212 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.388840 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93c4964-18cf-48c3-b3b1-dc7107d8542a-catalog-content\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.388952 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsg45\" (UniqueName: \"kubernetes.io/projected/f93c4964-18cf-48c3-b3b1-dc7107d8542a-kube-api-access-qsg45\") pod \"community-operators-nmz5s\" (UID: 
\"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.388991 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93c4964-18cf-48c3-b3b1-dc7107d8542a-utilities\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.435507 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.490447 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93c4964-18cf-48c3-b3b1-dc7107d8542a-catalog-content\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.490577 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsg45\" (UniqueName: \"kubernetes.io/projected/f93c4964-18cf-48c3-b3b1-dc7107d8542a-kube-api-access-qsg45\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.490701 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93c4964-18cf-48c3-b3b1-dc7107d8542a-utilities\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.491393 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f93c4964-18cf-48c3-b3b1-dc7107d8542a-utilities\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.491777 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f93c4964-18cf-48c3-b3b1-dc7107d8542a-catalog-content\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.520735 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsg45\" (UniqueName: \"kubernetes.io/projected/f93c4964-18cf-48c3-b3b1-dc7107d8542a-kube-api-access-qsg45\") pod \"community-operators-nmz5s\" (UID: \"f93c4964-18cf-48c3-b3b1-dc7107d8542a\") " pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.640998 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xj85m"] Feb 23 08:55:30 crc kubenswrapper[4940]: W0223 08:55:30.645065 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ecdea2_a9cb_4de7_8350_894a47f81718.slice/crio-597b0952c3d1134efc8220d6ccd84d2fbf4d2f69a33a184624158d51e3402eb9 WatchSource:0}: Error finding container 597b0952c3d1134efc8220d6ccd84d2fbf4d2f69a33a184624158d51e3402eb9: Status 404 returned error can't find the container with id 597b0952c3d1134efc8220d6ccd84d2fbf4d2f69a33a184624158d51e3402eb9 Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.657162 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.902348 4940 generic.go:334] "Generic (PLEG): container finished" podID="b812b371-b4f8-439d-8a46-152ba8e9b7bf" containerID="285def2e565f3e708dd91119aa4cf936e28139b14d5b9df7f6b3513478f305c9" exitCode=0 Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.902409 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t7mh" event={"ID":"b812b371-b4f8-439d-8a46-152ba8e9b7bf","Type":"ContainerDied","Data":"285def2e565f3e708dd91119aa4cf936e28139b14d5b9df7f6b3513478f305c9"} Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.905636 4940 generic.go:334] "Generic (PLEG): container finished" podID="612eb23a-7ea4-4c79-bcfe-a627918a7e3f" containerID="a117302b24c3956da01e292ba08c65122e6d8f97847a10e916ef75c39f0e5ee6" exitCode=0 Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.906403 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmdzn" event={"ID":"612eb23a-7ea4-4c79-bcfe-a627918a7e3f","Type":"ContainerDied","Data":"a117302b24c3956da01e292ba08c65122e6d8f97847a10e916ef75c39f0e5ee6"} Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.909374 4940 generic.go:334] "Generic (PLEG): container finished" podID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerID="b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3" exitCode=0 Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.909486 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xj85m" event={"ID":"67ecdea2-a9cb-4de7-8350-894a47f81718","Type":"ContainerDied","Data":"b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3"} Feb 23 08:55:30 crc kubenswrapper[4940]: I0223 08:55:30.910012 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xj85m" 
event={"ID":"67ecdea2-a9cb-4de7-8350-894a47f81718","Type":"ContainerStarted","Data":"597b0952c3d1134efc8220d6ccd84d2fbf4d2f69a33a184624158d51e3402eb9"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.037445 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmz5s"] Feb 23 08:55:31 crc kubenswrapper[4940]: W0223 08:55:31.041886 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf93c4964_18cf_48c3_b3b1_dc7107d8542a.slice/crio-9bcc4197e41884827aa6beaff37a92034f2fc996dc30eb1462382738d2a6669d WatchSource:0}: Error finding container 9bcc4197e41884827aa6beaff37a92034f2fc996dc30eb1462382738d2a6669d: Status 404 returned error can't find the container with id 9bcc4197e41884827aa6beaff37a92034f2fc996dc30eb1462382738d2a6669d Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.429203 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.429249 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.429289 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.430353 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"829eb1f5afcbc1f2a52e3bf9dd8fc7112a0f410f0c39601eac390265ff2bb42a"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.430465 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://829eb1f5afcbc1f2a52e3bf9dd8fc7112a0f410f0c39601eac390265ff2bb42a" gracePeriod=600 Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.926194 4940 generic.go:334] "Generic (PLEG): container finished" podID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerID="2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2" exitCode=0 Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.926265 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xj85m" event={"ID":"67ecdea2-a9cb-4de7-8350-894a47f81718","Type":"ContainerDied","Data":"2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.938734 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6t7mh" event={"ID":"b812b371-b4f8-439d-8a46-152ba8e9b7bf","Type":"ContainerStarted","Data":"a75a725b8fff133fe054034c8693c90acb553ff685c4d7d287f1b50b017e7d5f"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.942623 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="829eb1f5afcbc1f2a52e3bf9dd8fc7112a0f410f0c39601eac390265ff2bb42a" exitCode=0 Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.942652 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"829eb1f5afcbc1f2a52e3bf9dd8fc7112a0f410f0c39601eac390265ff2bb42a"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.942720 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"082fd847d235e860d5089e1f82b477d7544abdb6fa8b0e1a1f32dd0087a19959"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.942743 4940 scope.go:117] "RemoveContainer" containerID="a57b4e844c0d09d7debf8e4d2507287815cc75be308dc7010647ac2041181dc7" Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.952112 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmdzn" event={"ID":"612eb23a-7ea4-4c79-bcfe-a627918a7e3f","Type":"ContainerStarted","Data":"dc6784f062f6e25248c6d62512362baa54003793ef922777ba1742f871807b15"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.956998 4940 generic.go:334] "Generic (PLEG): container finished" podID="f93c4964-18cf-48c3-b3b1-dc7107d8542a" containerID="36e31c6f9835ed07f395266619dfd34681121488e5f016f68e84d63d1c11b179" exitCode=0 Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.957059 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmz5s" event={"ID":"f93c4964-18cf-48c3-b3b1-dc7107d8542a","Type":"ContainerDied","Data":"36e31c6f9835ed07f395266619dfd34681121488e5f016f68e84d63d1c11b179"} Feb 23 08:55:31 crc kubenswrapper[4940]: I0223 08:55:31.957099 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmz5s" event={"ID":"f93c4964-18cf-48c3-b3b1-dc7107d8542a","Type":"ContainerStarted","Data":"9bcc4197e41884827aa6beaff37a92034f2fc996dc30eb1462382738d2a6669d"} Feb 23 08:55:31 crc kubenswrapper[4940]: 
I0223 08:55:31.982303 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6t7mh" podStartSLOduration=2.468857523 podStartE2EDuration="4.982283073s" podCreationTimestamp="2026-02-23 08:55:27 +0000 UTC" firstStartedPulling="2026-02-23 08:55:28.885718997 +0000 UTC m=+460.268925154" lastFinishedPulling="2026-02-23 08:55:31.399144547 +0000 UTC m=+462.782350704" observedRunningTime="2026-02-23 08:55:31.969798237 +0000 UTC m=+463.353004394" watchObservedRunningTime="2026-02-23 08:55:31.982283073 +0000 UTC m=+463.365489230" Feb 23 08:55:32 crc kubenswrapper[4940]: I0223 08:55:32.970860 4940 generic.go:334] "Generic (PLEG): container finished" podID="f93c4964-18cf-48c3-b3b1-dc7107d8542a" containerID="5dd3d8d82d7334b3db487875fe6c57d64a53029511c6562c99081298913093ee" exitCode=0 Feb 23 08:55:32 crc kubenswrapper[4940]: I0223 08:55:32.970944 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmz5s" event={"ID":"f93c4964-18cf-48c3-b3b1-dc7107d8542a","Type":"ContainerDied","Data":"5dd3d8d82d7334b3db487875fe6c57d64a53029511c6562c99081298913093ee"} Feb 23 08:55:32 crc kubenswrapper[4940]: I0223 08:55:32.974189 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xj85m" event={"ID":"67ecdea2-a9cb-4de7-8350-894a47f81718","Type":"ContainerStarted","Data":"8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b"} Feb 23 08:55:33 crc kubenswrapper[4940]: I0223 08:55:33.000525 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qmdzn" podStartSLOduration=3.600474165 podStartE2EDuration="6.000504541s" podCreationTimestamp="2026-02-23 08:55:27 +0000 UTC" firstStartedPulling="2026-02-23 08:55:28.887950059 +0000 UTC m=+460.271156226" lastFinishedPulling="2026-02-23 08:55:31.287980445 +0000 UTC m=+462.671186602" observedRunningTime="2026-02-23 
08:55:32.029178073 +0000 UTC m=+463.412384350" watchObservedRunningTime="2026-02-23 08:55:33.000504541 +0000 UTC m=+464.383710718" Feb 23 08:55:33 crc kubenswrapper[4940]: I0223 08:55:33.017270 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xj85m" podStartSLOduration=1.619287001 podStartE2EDuration="3.017254134s" podCreationTimestamp="2026-02-23 08:55:30 +0000 UTC" firstStartedPulling="2026-02-23 08:55:30.910502313 +0000 UTC m=+462.293708470" lastFinishedPulling="2026-02-23 08:55:32.308469436 +0000 UTC m=+463.691675603" observedRunningTime="2026-02-23 08:55:33.012706198 +0000 UTC m=+464.395912355" watchObservedRunningTime="2026-02-23 08:55:33.017254134 +0000 UTC m=+464.400460281" Feb 23 08:55:33 crc kubenswrapper[4940]: I0223 08:55:33.980449 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmz5s" event={"ID":"f93c4964-18cf-48c3-b3b1-dc7107d8542a","Type":"ContainerStarted","Data":"d90ece7f18c28fb843f8daad6f4839987b5a782d66d3f5ba12fa47ea602b08d9"} Feb 23 08:55:33 crc kubenswrapper[4940]: I0223 08:55:33.999447 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmz5s" podStartSLOduration=2.551179546 podStartE2EDuration="3.999425696s" podCreationTimestamp="2026-02-23 08:55:30 +0000 UTC" firstStartedPulling="2026-02-23 08:55:31.958359743 +0000 UTC m=+463.341565900" lastFinishedPulling="2026-02-23 08:55:33.406605893 +0000 UTC m=+464.789812050" observedRunningTime="2026-02-23 08:55:33.998554708 +0000 UTC m=+465.381760875" watchObservedRunningTime="2026-02-23 08:55:33.999425696 +0000 UTC m=+465.382631853" Feb 23 08:55:38 crc kubenswrapper[4940]: I0223 08:55:38.077165 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:38 crc kubenswrapper[4940]: I0223 08:55:38.077763 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:38 crc kubenswrapper[4940]: I0223 08:55:38.120898 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:38 crc kubenswrapper[4940]: I0223 08:55:38.249281 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:38 crc kubenswrapper[4940]: I0223 08:55:38.249354 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:38 crc kubenswrapper[4940]: I0223 08:55:38.288937 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:39 crc kubenswrapper[4940]: I0223 08:55:39.068114 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6t7mh" Feb 23 08:55:39 crc kubenswrapper[4940]: I0223 08:55:39.085516 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qmdzn" Feb 23 08:55:40 crc kubenswrapper[4940]: I0223 08:55:40.436687 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:40 crc kubenswrapper[4940]: I0223 08:55:40.436741 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:40 crc kubenswrapper[4940]: I0223 08:55:40.481586 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:40 crc kubenswrapper[4940]: I0223 08:55:40.658766 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:40 
crc kubenswrapper[4940]: I0223 08:55:40.659869 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:40 crc kubenswrapper[4940]: I0223 08:55:40.705963 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:41 crc kubenswrapper[4940]: I0223 08:55:41.082364 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmz5s" Feb 23 08:55:41 crc kubenswrapper[4940]: I0223 08:55:41.101532 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 08:55:43 crc kubenswrapper[4940]: I0223 08:55:43.454043 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.186378 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" podUID="05cfdf5e-5390-4f32-986d-02872c05f444" containerName="registry" containerID="cri-o://1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029" gracePeriod=30 Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.564664 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.619541 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05cfdf5e-5390-4f32-986d-02872c05f444-ca-trust-extracted\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.619601 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-registry-certificates\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.619832 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.619867 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-bound-sa-token\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.619920 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-trusted-ca\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.619972 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05cfdf5e-5390-4f32-986d-02872c05f444-installation-pull-secrets\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.620019 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5btc8\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-kube-api-access-5btc8\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.620059 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-registry-tls\") pod \"05cfdf5e-5390-4f32-986d-02872c05f444\" (UID: \"05cfdf5e-5390-4f32-986d-02872c05f444\") " Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.621085 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.621211 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.625838 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-kube-api-access-5btc8" (OuterVolumeSpecName: "kube-api-access-5btc8") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "kube-api-access-5btc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.626081 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.626688 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.626886 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cfdf5e-5390-4f32-986d-02872c05f444-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.630759 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.642890 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05cfdf5e-5390-4f32-986d-02872c05f444-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "05cfdf5e-5390-4f32-986d-02872c05f444" (UID: "05cfdf5e-5390-4f32-986d-02872c05f444"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.720994 4940 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05cfdf5e-5390-4f32-986d-02872c05f444-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.721024 4940 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.721037 4940 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.721045 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/05cfdf5e-5390-4f32-986d-02872c05f444-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.721053 4940 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05cfdf5e-5390-4f32-986d-02872c05f444-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.721062 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5btc8\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-kube-api-access-5btc8\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:53 crc kubenswrapper[4940]: I0223 08:55:53.721070 4940 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05cfdf5e-5390-4f32-986d-02872c05f444-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.103299 4940 generic.go:334] "Generic (PLEG): container finished" podID="05cfdf5e-5390-4f32-986d-02872c05f444" containerID="1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029" exitCode=0 Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.103373 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" event={"ID":"05cfdf5e-5390-4f32-986d-02872c05f444","Type":"ContainerDied","Data":"1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029"} Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.103423 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" event={"ID":"05cfdf5e-5390-4f32-986d-02872c05f444","Type":"ContainerDied","Data":"ef97393bd95d235ea6f1c38119a4749b446c23f6e22aafecb53ba9c5c8018584"} Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.103460 4940 scope.go:117] "RemoveContainer" 
containerID="1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029" Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.103758 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kzrfw" Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.131338 4940 scope.go:117] "RemoveContainer" containerID="1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029" Feb 23 08:55:54 crc kubenswrapper[4940]: E0223 08:55:54.132280 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029\": container with ID starting with 1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029 not found: ID does not exist" containerID="1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029" Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.132439 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029"} err="failed to get container status \"1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029\": rpc error: code = NotFound desc = could not find container \"1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029\": container with ID starting with 1c8867353593e94f0ede13e01668356dc2d183093456d7d2df192c012790e029 not found: ID does not exist" Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.149946 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kzrfw"] Feb 23 08:55:54 crc kubenswrapper[4940]: I0223 08:55:54.156441 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kzrfw"] Feb 23 08:55:55 crc kubenswrapper[4940]: I0223 08:55:55.355336 4940 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="05cfdf5e-5390-4f32-986d-02872c05f444" path="/var/lib/kubelet/pods/05cfdf5e-5390-4f32-986d-02872c05f444/volumes" Feb 23 08:57:31 crc kubenswrapper[4940]: I0223 08:57:31.429519 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:57:31 crc kubenswrapper[4940]: I0223 08:57:31.430376 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:58:01 crc kubenswrapper[4940]: I0223 08:58:01.429545 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 08:58:01 crc kubenswrapper[4940]: I0223 08:58:01.430251 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.429708 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.430525 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.430609 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.431883 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"082fd847d235e860d5089e1f82b477d7544abdb6fa8b0e1a1f32dd0087a19959"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.432014 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://082fd847d235e860d5089e1f82b477d7544abdb6fa8b0e1a1f32dd0087a19959" gracePeriod=600 Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.595138 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="082fd847d235e860d5089e1f82b477d7544abdb6fa8b0e1a1f32dd0087a19959" exitCode=0 Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.595209 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"082fd847d235e860d5089e1f82b477d7544abdb6fa8b0e1a1f32dd0087a19959"} Feb 23 08:58:31 crc kubenswrapper[4940]: I0223 08:58:31.595293 4940 scope.go:117] "RemoveContainer" containerID="829eb1f5afcbc1f2a52e3bf9dd8fc7112a0f410f0c39601eac390265ff2bb42a" Feb 23 08:58:32 crc kubenswrapper[4940]: I0223 08:58:32.607771 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"2bd3eb7943bad5600333867d165292448d5139e55ddeeb5836e6b84ea71a9212"} Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.234164 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2"] Feb 23 08:58:53 crc kubenswrapper[4940]: E0223 08:58:53.235380 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cfdf5e-5390-4f32-986d-02872c05f444" containerName="registry" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.235398 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cfdf5e-5390-4f32-986d-02872c05f444" containerName="registry" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.235527 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="05cfdf5e-5390-4f32-986d-02872c05f444" containerName="registry" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.236072 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.237790 4940 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pmrc5" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.238678 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.238830 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-csxvp"] Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.239758 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-csxvp" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.241716 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.243730 4940 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jkzr7" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.246481 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2"] Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.282291 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-csxvp"] Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.293147 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wv7r6"] Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.294448 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.307488 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wv7r6"] Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.308068 4940 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-sdzqz" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.401270 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pmr\" (UniqueName: \"kubernetes.io/projected/ea6d2e05-15f3-4d73-b9e7-d22652f685ff-kube-api-access-b5pmr\") pod \"cert-manager-858654f9db-csxvp\" (UID: \"ea6d2e05-15f3-4d73-b9e7-d22652f685ff\") " pod="cert-manager/cert-manager-858654f9db-csxvp" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.401484 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmf9\" (UniqueName: \"kubernetes.io/projected/33328c1e-cfb4-435b-a5a0-8b1ec675055a-kube-api-access-mxmf9\") pod \"cert-manager-webhook-687f57d79b-wv7r6\" (UID: \"33328c1e-cfb4-435b-a5a0-8b1ec675055a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.401549 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbsx\" (UniqueName: \"kubernetes.io/projected/ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3-kube-api-access-xzbsx\") pod \"cert-manager-cainjector-cf98fcc89-ls9d2\" (UID: \"ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.502945 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmf9\" (UniqueName: 
\"kubernetes.io/projected/33328c1e-cfb4-435b-a5a0-8b1ec675055a-kube-api-access-mxmf9\") pod \"cert-manager-webhook-687f57d79b-wv7r6\" (UID: \"33328c1e-cfb4-435b-a5a0-8b1ec675055a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.502995 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzbsx\" (UniqueName: \"kubernetes.io/projected/ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3-kube-api-access-xzbsx\") pod \"cert-manager-cainjector-cf98fcc89-ls9d2\" (UID: \"ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.503031 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pmr\" (UniqueName: \"kubernetes.io/projected/ea6d2e05-15f3-4d73-b9e7-d22652f685ff-kube-api-access-b5pmr\") pod \"cert-manager-858654f9db-csxvp\" (UID: \"ea6d2e05-15f3-4d73-b9e7-d22652f685ff\") " pod="cert-manager/cert-manager-858654f9db-csxvp" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.521058 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmf9\" (UniqueName: \"kubernetes.io/projected/33328c1e-cfb4-435b-a5a0-8b1ec675055a-kube-api-access-mxmf9\") pod \"cert-manager-webhook-687f57d79b-wv7r6\" (UID: \"33328c1e-cfb4-435b-a5a0-8b1ec675055a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.521248 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzbsx\" (UniqueName: \"kubernetes.io/projected/ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3-kube-api-access-xzbsx\") pod \"cert-manager-cainjector-cf98fcc89-ls9d2\" (UID: \"ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.522524 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pmr\" (UniqueName: \"kubernetes.io/projected/ea6d2e05-15f3-4d73-b9e7-d22652f685ff-kube-api-access-b5pmr\") pod \"cert-manager-858654f9db-csxvp\" (UID: \"ea6d2e05-15f3-4d73-b9e7-d22652f685ff\") " pod="cert-manager/cert-manager-858654f9db-csxvp" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.608518 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.619464 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-csxvp" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.626102 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.838675 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wv7r6"] Feb 23 08:58:53 crc kubenswrapper[4940]: W0223 08:58:53.846021 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33328c1e_cfb4_435b_a5a0_8b1ec675055a.slice/crio-f8a1063fb8b3c3d87af8f488a06578f20593c7456d937ffa84272d2cdbe089fb WatchSource:0}: Error finding container f8a1063fb8b3c3d87af8f488a06578f20593c7456d937ffa84272d2cdbe089fb: Status 404 returned error can't find the container with id f8a1063fb8b3c3d87af8f488a06578f20593c7456d937ffa84272d2cdbe089fb Feb 23 08:58:53 crc kubenswrapper[4940]: I0223 08:58:53.849449 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 08:58:54 crc kubenswrapper[4940]: I0223 08:58:54.111576 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-csxvp"] Feb 23 08:58:54 crc 
kubenswrapper[4940]: W0223 08:58:54.118190 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff0dd0c0_0c3d_4373_bf64_bfbfbda693d3.slice/crio-d78629cd64cb7c113ceded7e44e556258e620d51c35558daf2b6d7b252b71bc0 WatchSource:0}: Error finding container d78629cd64cb7c113ceded7e44e556258e620d51c35558daf2b6d7b252b71bc0: Status 404 returned error can't find the container with id d78629cd64cb7c113ceded7e44e556258e620d51c35558daf2b6d7b252b71bc0 Feb 23 08:58:54 crc kubenswrapper[4940]: W0223 08:58:54.119522 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea6d2e05_15f3_4d73_b9e7_d22652f685ff.slice/crio-fe9fe901581a6cd1851b215b580fbe1a4be6b05e2d0b52348df194587e20d795 WatchSource:0}: Error finding container fe9fe901581a6cd1851b215b580fbe1a4be6b05e2d0b52348df194587e20d795: Status 404 returned error can't find the container with id fe9fe901581a6cd1851b215b580fbe1a4be6b05e2d0b52348df194587e20d795 Feb 23 08:58:54 crc kubenswrapper[4940]: I0223 08:58:54.120312 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2"] Feb 23 08:58:54 crc kubenswrapper[4940]: I0223 08:58:54.768842 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" event={"ID":"33328c1e-cfb4-435b-a5a0-8b1ec675055a","Type":"ContainerStarted","Data":"f8a1063fb8b3c3d87af8f488a06578f20593c7456d937ffa84272d2cdbe089fb"} Feb 23 08:58:54 crc kubenswrapper[4940]: I0223 08:58:54.771462 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" event={"ID":"ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3","Type":"ContainerStarted","Data":"d78629cd64cb7c113ceded7e44e556258e620d51c35558daf2b6d7b252b71bc0"} Feb 23 08:58:54 crc kubenswrapper[4940]: I0223 08:58:54.773009 4940 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-858654f9db-csxvp" event={"ID":"ea6d2e05-15f3-4d73-b9e7-d22652f685ff","Type":"ContainerStarted","Data":"fe9fe901581a6cd1851b215b580fbe1a4be6b05e2d0b52348df194587e20d795"} Feb 23 08:58:56 crc kubenswrapper[4940]: I0223 08:58:56.785598 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" event={"ID":"33328c1e-cfb4-435b-a5a0-8b1ec675055a","Type":"ContainerStarted","Data":"fa9ce3989ebbf1f1de8441cd1be59269a6362dfe80d7f02ca1805c89ca88f3f5"} Feb 23 08:58:56 crc kubenswrapper[4940]: I0223 08:58:56.786187 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:58:56 crc kubenswrapper[4940]: I0223 08:58:56.799258 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" podStartSLOduration=1.3478372 podStartE2EDuration="3.799244339s" podCreationTimestamp="2026-02-23 08:58:53 +0000 UTC" firstStartedPulling="2026-02-23 08:58:53.849128325 +0000 UTC m=+665.232334482" lastFinishedPulling="2026-02-23 08:58:56.300535454 +0000 UTC m=+667.683741621" observedRunningTime="2026-02-23 08:58:56.798027583 +0000 UTC m=+668.181233740" watchObservedRunningTime="2026-02-23 08:58:56.799244339 +0000 UTC m=+668.182450496" Feb 23 08:58:57 crc kubenswrapper[4940]: I0223 08:58:57.793891 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-csxvp" event={"ID":"ea6d2e05-15f3-4d73-b9e7-d22652f685ff","Type":"ContainerStarted","Data":"a7173e022e599904e07dde37de87627bdc53e0fb6ab8f83957984ec5ee9159a8"} Feb 23 08:58:57 crc kubenswrapper[4940]: I0223 08:58:57.796532 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" 
event={"ID":"ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3","Type":"ContainerStarted","Data":"4e05478573f9089ea80384d54ece532a7233a95d18232c39d2766dae7e810100"} Feb 23 08:58:57 crc kubenswrapper[4940]: I0223 08:58:57.815394 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-csxvp" podStartSLOduration=1.374940172 podStartE2EDuration="4.815379988s" podCreationTimestamp="2026-02-23 08:58:53 +0000 UTC" firstStartedPulling="2026-02-23 08:58:54.121518356 +0000 UTC m=+665.504724513" lastFinishedPulling="2026-02-23 08:58:57.561958152 +0000 UTC m=+668.945164329" observedRunningTime="2026-02-23 08:58:57.811760608 +0000 UTC m=+669.194966775" watchObservedRunningTime="2026-02-23 08:58:57.815379988 +0000 UTC m=+669.198586165" Feb 23 08:58:57 crc kubenswrapper[4940]: I0223 08:58:57.827021 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ls9d2" podStartSLOduration=1.458412313 podStartE2EDuration="4.82700296s" podCreationTimestamp="2026-02-23 08:58:53 +0000 UTC" firstStartedPulling="2026-02-23 08:58:54.120795474 +0000 UTC m=+665.504001631" lastFinishedPulling="2026-02-23 08:58:57.489386121 +0000 UTC m=+668.872592278" observedRunningTime="2026-02-23 08:58:57.82569246 +0000 UTC m=+669.208898637" watchObservedRunningTime="2026-02-23 08:58:57.82700296 +0000 UTC m=+669.210209127" Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.943159 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qkw6w"] Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.944201 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="northd" containerID="cri-o://5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b" gracePeriod=30 Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 
08:59:02.944376 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="nbdb" containerID="cri-o://683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e" gracePeriod=30 Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.944454 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kube-rbac-proxy-node" containerID="cri-o://524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0" gracePeriod=30 Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.944503 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b" gracePeriod=30 Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.944572 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-acl-logging" containerID="cri-o://2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882" gracePeriod=30 Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.944577 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-controller" containerID="cri-o://9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca" gracePeriod=30 Feb 23 08:59:02 crc kubenswrapper[4940]: I0223 08:59:02.944768 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" 
podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="sbdb" containerID="cri-o://a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a" gracePeriod=30 Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.000151 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" containerID="cri-o://340eaaffa94ddb12c76a83ddbb966d6a4f34ca8e74f15b11fe931f5f2c8cca12" gracePeriod=30 Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.262534 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovnkube-controller/3.log" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.264429 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovn-acl-logging/0.log" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.264804 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qkw6w_d0b5a971-c6f4-4518-9bb3-49d228275668/ovn-controller/0.log" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.265119 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.328956 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qsgqr"] Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329210 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329225 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329236 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-acl-logging" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329244 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-acl-logging" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329255 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kube-rbac-proxy-node" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329264 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kube-rbac-proxy-node" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329283 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="nbdb" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329291 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="nbdb" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329303 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" 
containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329311 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329321 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kubecfg-setup" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329331 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kubecfg-setup" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329340 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329348 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329358 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="sbdb" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329367 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="sbdb" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329375 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329383 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329393 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329401 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329416 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="northd" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329424 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="northd" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329435 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329443 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329551 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="northd" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329563 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="sbdb" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329573 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329583 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-acl-logging" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329595 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329605 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329637 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovn-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329649 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329662 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="nbdb" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329671 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="kube-rbac-proxy-node" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.329786 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329795 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329906 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.329917 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" containerName="ovnkube-controller" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.331836 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353550 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-env-overrides\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353633 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2ck\" (UniqueName: \"kubernetes.io/projected/d0b5a971-c6f4-4518-9bb3-49d228275668-kube-api-access-sq2ck\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353679 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-openvswitch\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353719 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-node-log\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353754 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-systemd\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353787 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-log-socket\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353811 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-var-lib-openvswitch\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353835 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-ovn\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353855 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353884 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-kubelet\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353914 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-slash\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: 
\"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353937 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-script-lib\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353958 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-netns\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.353987 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-bin\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354011 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-config\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354042 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-ovn-kubernetes\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354072 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-systemd-units\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354099 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0b5a971-c6f4-4518-9bb3-49d228275668-ovn-node-metrics-cert\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354120 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-etc-openvswitch\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354143 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-netd\") pod \"d0b5a971-c6f4-4518-9bb3-49d228275668\" (UID: \"d0b5a971-c6f4-4518-9bb3-49d228275668\") " Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354293 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-ovn\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354320 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-ovnkube-config\") pod \"ovnkube-node-qsgqr\" (UID: 
\"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354362 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-slash\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354393 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-run-ovn-kubernetes\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354417 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-cni-bin\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354436 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-etc-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354419 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-env-overrides" (OuterVolumeSpecName: "env-overrides") pod 
"d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354457 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-systemd-units\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354572 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354653 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354682 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-systemd\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354752 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-env-overrides\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354823 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgvp\" (UniqueName: \"kubernetes.io/projected/c660e08f-37bb-4df2-84ad-be72e4aef556-kube-api-access-zhgvp\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354854 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-kubelet\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354906 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-node-log\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354966 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-run-netns\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.354993 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-log-socket\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355014 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-cni-netd\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355061 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-var-lib-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355083 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c660e08f-37bb-4df2-84ad-be72e4aef556-ovn-node-metrics-cert\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355131 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-ovnkube-script-lib\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355222 4940 reconciler_common.go:293] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355235 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355278 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355301 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-node-log" (OuterVolumeSpecName: "node-log") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355840 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.355938 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356005 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356047 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-slash" (OuterVolumeSpecName: "host-slash") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356271 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-log-socket" (OuterVolumeSpecName: "log-socket") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356340 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356347 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356379 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356394 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356421 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356470 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356779 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.356866 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.361393 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0b5a971-c6f4-4518-9bb3-49d228275668-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.361408 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b5a971-c6f4-4518-9bb3-49d228275668-kube-api-access-sq2ck" (OuterVolumeSpecName: "kube-api-access-sq2ck") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "kube-api-access-sq2ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.379324 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d0b5a971-c6f4-4518-9bb3-49d228275668" (UID: "d0b5a971-c6f4-4518-9bb3-49d228275668"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.455846 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-slash\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.455929 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-run-ovn-kubernetes\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.455975 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-cni-bin\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456011 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-etc-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456047 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-systemd-units\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 
crc kubenswrapper[4940]: I0223 08:59:03.456095 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456135 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456168 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-systemd\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456347 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-etc-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456389 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-run-ovn-kubernetes\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: 
I0223 08:59:03.456427 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456447 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-cni-bin\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456480 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456497 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-systemd\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456529 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-slash\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456213 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-env-overrides\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456469 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-systemd-units\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456644 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhgvp\" (UniqueName: \"kubernetes.io/projected/c660e08f-37bb-4df2-84ad-be72e4aef556-kube-api-access-zhgvp\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456711 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-kubelet\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456752 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-node-log\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456807 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-kubelet\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456805 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-run-netns\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456855 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-run-netns\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456866 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-node-log\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456941 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-log-socket\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.456993 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-cni-netd\") pod \"ovnkube-node-qsgqr\" (UID: 
\"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457028 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-var-lib-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457078 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c660e08f-37bb-4df2-84ad-be72e4aef556-ovn-node-metrics-cert\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457108 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-ovnkube-script-lib\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457112 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-log-socket\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457197 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-var-lib-openvswitch\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457221 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-ovn\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457278 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-ovnkube-config\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457394 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-run-ovn\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c660e08f-37bb-4df2-84ad-be72e4aef556-host-cni-netd\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457457 4940 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457497 4940 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457514 4940 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457527 4940 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457538 4940 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d0b5a971-c6f4-4518-9bb3-49d228275668-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457560 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-env-overrides\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457570 4940 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457669 4940 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457689 4940 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-sq2ck\" (UniqueName: \"kubernetes.io/projected/d0b5a971-c6f4-4518-9bb3-49d228275668-kube-api-access-sq2ck\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457716 4940 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457736 4940 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-node-log\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457754 4940 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457779 4940 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-log-socket\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457806 4940 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457830 4940 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457855 4940 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457882 4940 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457907 4940 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-slash\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457931 4940 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d0b5a971-c6f4-4518-9bb3-49d228275668-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.457954 4940 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d0b5a971-c6f4-4518-9bb3-49d228275668-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.458800 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-ovnkube-script-lib\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.458898 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c660e08f-37bb-4df2-84ad-be72e4aef556-ovnkube-config\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc 
kubenswrapper[4940]: I0223 08:59:03.464275 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c660e08f-37bb-4df2-84ad-be72e4aef556-ovn-node-metrics-cert\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.485707 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhgvp\" (UniqueName: \"kubernetes.io/projected/c660e08f-37bb-4df2-84ad-be72e4aef556-kube-api-access-zhgvp\") pod \"ovnkube-node-qsgqr\" (UID: \"c660e08f-37bb-4df2-84ad-be72e4aef556\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.505032 4940 scope.go:117] "RemoveContainer" containerID="4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.524571 4940 scope.go:117] "RemoveContainer" containerID="2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.542411 4940 scope.go:117] "RemoveContainer" containerID="524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.560820 4940 scope.go:117] "RemoveContainer" containerID="56ef19c4e2659a35760fb8e2bb8a79c35a283fa3f4b00766bdd237d1464bd933" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.576509 4940 scope.go:117] "RemoveContainer" containerID="5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.590901 4940 scope.go:117] "RemoveContainer" containerID="1649bf6fd3e252298bb7f2414c8ed3d153ffbc4ec0a8497b91cc94cd41c0359f" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.608561 4940 scope.go:117] "RemoveContainer" 
containerID="340eaaffa94ddb12c76a83ddbb966d6a4f34ca8e74f15b11fe931f5f2c8cca12" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.628678 4940 scope.go:117] "RemoveContainer" containerID="683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.629505 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wv7r6" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.646560 4940 scope.go:117] "RemoveContainer" containerID="9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.655142 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.679240 4940 scope.go:117] "RemoveContainer" containerID="a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.695844 4940 scope.go:117] "RemoveContainer" containerID="fa9251aa0cd26d68256087c8d2f9d36cae16abcb8db328bb7d53cdd7bfacafd0" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835012 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"340eaaffa94ddb12c76a83ddbb966d6a4f34ca8e74f15b11fe931f5f2c8cca12"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835457 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"a7a97208ce587c7633bbea9ad618e0721a712dfd62ff249a5843018bc125922a"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835486 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" 
event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"683969deceafc005dbe91b7dbc1fd159cc1da9f55e48112ff68947c40997891e"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835506 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"5cff793d0cde25f9d7edf3abaaf81005870d7ad01f985cfcc3cf3f5a57ea062b"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835526 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"4558e2adbf43351cb79df3a10fc2e37fa7edc8eae79ceb459ae1ea068e8df21b"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835545 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"524ac500bdffecad2d1533ea863b5f2dc0acd130368e80faa476facf6817bcc0"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835564 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"2f189355dc4d43c8b1df6111c9c3a128e90b724b976602933c384817e4f51882"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835583 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"9ef9add89b50dd7b480368148d404e8f7128d92e6d921c3c89a17463706f3dca"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.835604 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" 
event={"ID":"d0b5a971-c6f4-4518-9bb3-49d228275668","Type":"ContainerDied","Data":"4f21335f3055d4efa629aa9cfc916ee3c69d12b98c562517e8087e1715257691"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.839590 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"3e7bb18c0f5f404d96f297802e89976731974fa37efcf3a1d2072c738ead25a7"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.839720 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"638cbd8d618ed2135b8ce8d9e61b793f640f5199d20ddac14388afb5be294762"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.842902 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/2.log" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.842988 4940 generic.go:334] "Generic (PLEG): container finished" podID="ec3904ad-5d0b-46b4-9c13-68454d9a3cb2" containerID="15e9457644bb5f9d8fa27855d11196a713b74e007cbba9692fff4c486fc19e99" exitCode=2 Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.843179 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qkw6w" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.843591 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerDied","Data":"15e9457644bb5f9d8fa27855d11196a713b74e007cbba9692fff4c486fc19e99"} Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.844153 4940 scope.go:117] "RemoveContainer" containerID="15e9457644bb5f9d8fa27855d11196a713b74e007cbba9692fff4c486fc19e99" Feb 23 08:59:03 crc kubenswrapper[4940]: E0223 08:59:03.844496 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-czrqm_openshift-multus(ec3904ad-5d0b-46b4-9c13-68454d9a3cb2)\"" pod="openshift-multus/multus-czrqm" podUID="ec3904ad-5d0b-46b4-9c13-68454d9a3cb2" Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.946440 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qkw6w"] Feb 23 08:59:03 crc kubenswrapper[4940]: I0223 08:59:03.955121 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qkw6w"] Feb 23 08:59:04 crc kubenswrapper[4940]: I0223 08:59:04.851577 4940 generic.go:334] "Generic (PLEG): container finished" podID="c660e08f-37bb-4df2-84ad-be72e4aef556" containerID="3e7bb18c0f5f404d96f297802e89976731974fa37efcf3a1d2072c738ead25a7" exitCode=0 Feb 23 08:59:04 crc kubenswrapper[4940]: I0223 08:59:04.851673 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerDied","Data":"3e7bb18c0f5f404d96f297802e89976731974fa37efcf3a1d2072c738ead25a7"} Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.351530 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="d0b5a971-c6f4-4518-9bb3-49d228275668" path="/var/lib/kubelet/pods/d0b5a971-c6f4-4518-9bb3-49d228275668/volumes" Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.872018 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"7fa30aa8469580b3b13a31ca57c1ae6cb0a48d5c1533aefee5e5695444699c00"} Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.872096 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"ee05f4f62fa33072be87ae5dd4bf7bfce1b70ecac14bca5149664b11836c294b"} Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.872117 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"ff909da392d955bd0a4fb03f5cfab9ac685892fbcedc713a5d461f314b980a50"} Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.872135 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"1b2e8cf8a94d3db252cd502757832bfed5f67ca459bde02cf243025b0433f63f"} Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.872154 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"251bc9dd3bc121f9a271ad1eb25a08f7214e998bd03f9f0c45c84549a89ff42c"} Feb 23 08:59:05 crc kubenswrapper[4940]: I0223 08:59:05.872185 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" 
event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"b604e5e09ea1a19e5e6a8188874cc1c1319dc8cc82cc989fcdcfcb6b5b45606e"} Feb 23 08:59:07 crc kubenswrapper[4940]: I0223 08:59:07.891412 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"069d85e18838d69d9b4201a110612506d78d866f7ffc3c46e2b87498365e7a67"} Feb 23 08:59:10 crc kubenswrapper[4940]: I0223 08:59:10.916310 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" event={"ID":"c660e08f-37bb-4df2-84ad-be72e4aef556","Type":"ContainerStarted","Data":"300f84e9fb26ef49587af147398baf6676dfb062aea62ddea8de19c5e6227ac7"} Feb 23 08:59:10 crc kubenswrapper[4940]: I0223 08:59:10.917069 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:10 crc kubenswrapper[4940]: I0223 08:59:10.917105 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:10 crc kubenswrapper[4940]: I0223 08:59:10.977986 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:11 crc kubenswrapper[4940]: I0223 08:59:11.029937 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" podStartSLOduration=8.029915615 podStartE2EDuration="8.029915615s" podCreationTimestamp="2026-02-23 08:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 08:59:10.962562843 +0000 UTC m=+682.345769070" watchObservedRunningTime="2026-02-23 08:59:11.029915615 +0000 UTC m=+682.413121792" Feb 23 08:59:11 crc kubenswrapper[4940]: I0223 08:59:11.923867 4940 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:11 crc kubenswrapper[4940]: I0223 08:59:11.971138 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:15 crc kubenswrapper[4940]: I0223 08:59:15.346201 4940 scope.go:117] "RemoveContainer" containerID="15e9457644bb5f9d8fa27855d11196a713b74e007cbba9692fff4c486fc19e99" Feb 23 08:59:15 crc kubenswrapper[4940]: E0223 08:59:15.347302 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-czrqm_openshift-multus(ec3904ad-5d0b-46b4-9c13-68454d9a3cb2)\"" pod="openshift-multus/multus-czrqm" podUID="ec3904ad-5d0b-46b4-9c13-68454d9a3cb2" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.805818 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph"] Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.808851 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.813472 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ndkbt" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.814002 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.814506 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.958136 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-run\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.958833 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvxg\" (UniqueName: \"kubernetes.io/projected/63ebc8a2-744a-4844-b60d-80fefedbf7df-kube-api-access-4jvxg\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.959021 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-data\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:24 crc kubenswrapper[4940]: I0223 08:59:24.959186 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-log\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 
08:59:25.061210 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-log\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.061345 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-run\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.061409 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvxg\" (UniqueName: \"kubernetes.io/projected/63ebc8a2-744a-4844-b60d-80fefedbf7df-kube-api-access-4jvxg\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.061515 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-data\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.062329 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-data\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.062466 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-log\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.062482 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/empty-dir/63ebc8a2-744a-4844-b60d-80fefedbf7df-run\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.095065 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jvxg\" (UniqueName: \"kubernetes.io/projected/63ebc8a2-744a-4844-b60d-80fefedbf7df-kube-api-access-4jvxg\") pod \"ceph\" (UID: \"63ebc8a2-744a-4844-b60d-80fefedbf7df\") " pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: I0223 08:59:25.131941 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph" Feb 23 08:59:25 crc kubenswrapper[4940]: W0223 08:59:25.164808 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63ebc8a2_744a_4844_b60d_80fefedbf7df.slice/crio-25625831cb27cb43aa0bb9db9d79a10f15fe37adaaf91e4c9f616b318449c467 WatchSource:0}: Error finding container 25625831cb27cb43aa0bb9db9d79a10f15fe37adaaf91e4c9f616b318449c467: Status 404 returned error can't find the container with id 25625831cb27cb43aa0bb9db9d79a10f15fe37adaaf91e4c9f616b318449c467 Feb 23 08:59:25 crc kubenswrapper[4940]: E0223 08:59:25.212883 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:25 crc kubenswrapper[4940]: E0223 08:59:25.230436 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:26 crc kubenswrapper[4940]: I0223 08:59:26.022404 4940 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceph" event={"ID":"63ebc8a2-744a-4844-b60d-80fefedbf7df","Type":"ContainerStarted","Data":"25625831cb27cb43aa0bb9db9d79a10f15fe37adaaf91e4c9f616b318449c467"} Feb 23 08:59:26 crc kubenswrapper[4940]: I0223 08:59:26.346158 4940 scope.go:117] "RemoveContainer" containerID="15e9457644bb5f9d8fa27855d11196a713b74e007cbba9692fff4c486fc19e99" Feb 23 08:59:26 crc kubenswrapper[4940]: E0223 08:59:26.419165 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:26 crc kubenswrapper[4940]: E0223 08:59:26.441903 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:27 crc kubenswrapper[4940]: I0223 08:59:27.028420 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-czrqm_ec3904ad-5d0b-46b4-9c13-68454d9a3cb2/kube-multus/2.log" Feb 23 08:59:27 crc kubenswrapper[4940]: I0223 08:59:27.028538 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-czrqm" event={"ID":"ec3904ad-5d0b-46b4-9c13-68454d9a3cb2","Type":"ContainerStarted","Data":"09fa60bd995ff399ad2bb9b48817d7dbf77e67a6911e75e0f10813c9d2bf65b6"} Feb 23 08:59:27 crc kubenswrapper[4940]: E0223 08:59:27.575810 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:27 crc kubenswrapper[4940]: E0223 08:59:27.590822 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:28 crc kubenswrapper[4940]: E0223 08:59:28.772890 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:28 crc kubenswrapper[4940]: E0223 08:59:28.787472 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:29 crc kubenswrapper[4940]: E0223 08:59:29.921336 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:29 crc kubenswrapper[4940]: E0223 08:59:29.941851 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:31 crc kubenswrapper[4940]: E0223 08:59:31.136528 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:31 crc kubenswrapper[4940]: E0223 08:59:31.153912 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate 
signed by unknown authority" Feb 23 08:59:32 crc kubenswrapper[4940]: E0223 08:59:32.330028 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:32 crc kubenswrapper[4940]: E0223 08:59:32.347839 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:33 crc kubenswrapper[4940]: E0223 08:59:33.550394 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:33 crc kubenswrapper[4940]: E0223 08:59:33.563628 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:33 crc kubenswrapper[4940]: I0223 08:59:33.682518 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qsgqr" Feb 23 08:59:34 crc kubenswrapper[4940]: E0223 08:59:34.708037 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:34 crc kubenswrapper[4940]: E0223 08:59:34.723580 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:35 crc kubenswrapper[4940]: E0223 08:59:35.856112 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:35 crc kubenswrapper[4940]: E0223 08:59:35.872128 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:37 crc kubenswrapper[4940]: E0223 08:59:37.100668 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:37 crc kubenswrapper[4940]: E0223 08:59:37.113119 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:38 crc kubenswrapper[4940]: E0223 08:59:38.301525 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:38 crc kubenswrapper[4940]: E0223 08:59:38.313999 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 08:59:39 crc kubenswrapper[4940]: E0223 08:59:39.496873 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:39 crc kubenswrapper[4940]: E0223 08:59:39.509978 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:40 crc kubenswrapper[4940]: E0223 08:59:40.657482 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:40 crc kubenswrapper[4940]: E0223 08:59:40.670889 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:41 crc kubenswrapper[4940]: E0223 08:59:41.843779 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:41 crc kubenswrapper[4940]: E0223 08:59:41.866854 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:43 crc kubenswrapper[4940]: E0223 08:59:43.120089 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:43 crc kubenswrapper[4940]: E0223 08:59:43.148983 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:44 crc kubenswrapper[4940]: E0223 08:59:44.308665 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:44 crc kubenswrapper[4940]: E0223 08:59:44.328658 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:45 crc kubenswrapper[4940]: E0223 08:59:45.307460 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/ceph/demo:latest-squid" Feb 23 08:59:45 crc kubenswrapper[4940]: E0223 08:59:45.308320 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceph,Image:quay.io/ceph/demo:latest-squid,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MON_IP,Value:192.168.126.11,ValueFrom:nil,},EnvVar{Name:CEPH_DAEMON,Value:demo,ValueFrom:nil,},EnvVar{Name:CEPH_PUBLIC_NETWORK,Value:0.0.0.0/0,ValueFrom:nil,},EnvVar{Name:DEMO_DAEMONS,Value:osd,mds,rgw,ValueFrom:nil,},EnvVar{Name:CEPH_DEMO_UID,Value:0,ValueFrom:nil,},EnvVar{Name:RGW_NAME,Value:ceph,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/var/lib/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log,ReadOnly:false,MountPath:/var/log/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jvxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceph_openstack(63ebc8a2-744a-4844-b60d-80fefedbf7df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 08:59:45 crc kubenswrapper[4940]: E0223 08:59:45.309894 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceph" podUID="63ebc8a2-744a-4844-b60d-80fefedbf7df" Feb 23 
08:59:45 crc kubenswrapper[4940]: E0223 08:59:45.472112 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:45 crc kubenswrapper[4940]: E0223 08:59:45.492692 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:46 crc kubenswrapper[4940]: E0223 08:59:46.300813 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceph\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/ceph/demo:latest-squid\\\"\"" pod="openstack/ceph" podUID="63ebc8a2-744a-4844-b60d-80fefedbf7df" Feb 23 08:59:46 crc kubenswrapper[4940]: E0223 08:59:46.641168 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:46 crc kubenswrapper[4940]: E0223 08:59:46.664667 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:47 crc kubenswrapper[4940]: E0223 08:59:47.804076 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:47 crc kubenswrapper[4940]: E0223 08:59:47.825526 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:48 crc kubenswrapper[4940]: E0223 08:59:48.971137 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:48 crc kubenswrapper[4940]: E0223 08:59:48.985542 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:50 crc kubenswrapper[4940]: E0223 08:59:50.199475 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:50 crc kubenswrapper[4940]: E0223 08:59:50.220534 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:51 crc kubenswrapper[4940]: E0223 08:59:51.430320 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:51 crc kubenswrapper[4940]: E0223 08:59:51.452934 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:52 crc kubenswrapper[4940]: E0223 08:59:52.646227 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:52 crc kubenswrapper[4940]: E0223 08:59:52.668150 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:53 crc kubenswrapper[4940]: E0223 08:59:53.860367 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:53 crc kubenswrapper[4940]: E0223 08:59:53.881810 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:55 crc kubenswrapper[4940]: E0223 08:59:55.085666 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:55 crc kubenswrapper[4940]: E0223 08:59:55.105169 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 08:59:56 crc kubenswrapper[4940]: E0223 08:59:56.290509 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:56 crc kubenswrapper[4940]: E0223 08:59:56.315078 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:57 crc kubenswrapper[4940]: E0223 08:59:57.457015 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:57 crc kubenswrapper[4940]: E0223 08:59:57.472390 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:58 crc kubenswrapper[4940]: I0223 08:59:58.376036 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph" event={"ID":"63ebc8a2-744a-4844-b60d-80fefedbf7df","Type":"ContainerStarted","Data":"c78d78a1b507eb875f71f9534e9cee2d79cf04a7fecd31192891a7f2fdd92417"} Feb 23 08:59:58 crc kubenswrapper[4940]: I0223 08:59:58.401038 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph" podStartSLOduration=1.76649325 podStartE2EDuration="34.401011404s" podCreationTimestamp="2026-02-23 08:59:24 +0000 UTC" firstStartedPulling="2026-02-23 08:59:25.169478588 +0000 UTC m=+696.552684785" lastFinishedPulling="2026-02-23 08:59:57.803996752 +0000 UTC m=+729.187202939" 
observedRunningTime="2026-02-23 08:59:58.395309449 +0000 UTC m=+729.778515646" watchObservedRunningTime="2026-02-23 08:59:58.401011404 +0000 UTC m=+729.784217601" Feb 23 08:59:58 crc kubenswrapper[4940]: E0223 08:59:58.621786 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:58 crc kubenswrapper[4940]: E0223 08:59:58.639496 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:59 crc kubenswrapper[4940]: E0223 08:59:59.854423 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 08:59:59 crc kubenswrapper[4940]: E0223 08:59:59.875003 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.191452 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr"] Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.192264 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.198530 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.211649 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr"] Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.212171 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.304418 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a33e39-df26-4233-aca9-edbe7b31aa62-config-volume\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.304595 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96a33e39-df26-4233-aca9-edbe7b31aa62-secret-volume\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.304675 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2xq6\" (UniqueName: \"kubernetes.io/projected/96a33e39-df26-4233-aca9-edbe7b31aa62-kube-api-access-w2xq6\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.405785 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a33e39-df26-4233-aca9-edbe7b31aa62-config-volume\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.405890 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96a33e39-df26-4233-aca9-edbe7b31aa62-secret-volume\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.405940 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2xq6\" (UniqueName: \"kubernetes.io/projected/96a33e39-df26-4233-aca9-edbe7b31aa62-kube-api-access-w2xq6\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.406881 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a33e39-df26-4233-aca9-edbe7b31aa62-config-volume\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.412179 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/96a33e39-df26-4233-aca9-edbe7b31aa62-secret-volume\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.426697 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2xq6\" (UniqueName: \"kubernetes.io/projected/96a33e39-df26-4233-aca9-edbe7b31aa62-kube-api-access-w2xq6\") pod \"collect-profiles-29530620-lwqfr\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.524284 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:00 crc kubenswrapper[4940]: I0223 09:00:00.952689 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr"] Feb 23 09:00:01 crc kubenswrapper[4940]: E0223 09:00:01.090584 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:01 crc kubenswrapper[4940]: E0223 09:00:01.107418 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:01 crc kubenswrapper[4940]: I0223 09:00:01.392263 4940 generic.go:334] "Generic (PLEG): container finished" podID="96a33e39-df26-4233-aca9-edbe7b31aa62" containerID="b4e34acd75f184b061368d58273c59c90b3d4ace233d52dd0374a92d234322e7" exitCode=0 Feb 23 09:00:01 crc 
kubenswrapper[4940]: I0223 09:00:01.392334 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" event={"ID":"96a33e39-df26-4233-aca9-edbe7b31aa62","Type":"ContainerDied","Data":"b4e34acd75f184b061368d58273c59c90b3d4ace233d52dd0374a92d234322e7"} Feb 23 09:00:01 crc kubenswrapper[4940]: I0223 09:00:01.392564 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" event={"ID":"96a33e39-df26-4233-aca9-edbe7b31aa62","Type":"ContainerStarted","Data":"0c99f16705eba7ba7daffeb26ad04322e5d2f2d3e99df7fa1bef8236cedf630a"} Feb 23 09:00:02 crc kubenswrapper[4940]: E0223 09:00:02.320729 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:02 crc kubenswrapper[4940]: E0223 09:00:02.342507 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.700326 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.841836 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a33e39-df26-4233-aca9-edbe7b31aa62-config-volume\") pod \"96a33e39-df26-4233-aca9-edbe7b31aa62\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.841952 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2xq6\" (UniqueName: \"kubernetes.io/projected/96a33e39-df26-4233-aca9-edbe7b31aa62-kube-api-access-w2xq6\") pod \"96a33e39-df26-4233-aca9-edbe7b31aa62\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.842054 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96a33e39-df26-4233-aca9-edbe7b31aa62-secret-volume\") pod \"96a33e39-df26-4233-aca9-edbe7b31aa62\" (UID: \"96a33e39-df26-4233-aca9-edbe7b31aa62\") " Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.844286 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96a33e39-df26-4233-aca9-edbe7b31aa62-config-volume" (OuterVolumeSpecName: "config-volume") pod "96a33e39-df26-4233-aca9-edbe7b31aa62" (UID: "96a33e39-df26-4233-aca9-edbe7b31aa62"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.853473 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96a33e39-df26-4233-aca9-edbe7b31aa62-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96a33e39-df26-4233-aca9-edbe7b31aa62" (UID: "96a33e39-df26-4233-aca9-edbe7b31aa62"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.853573 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a33e39-df26-4233-aca9-edbe7b31aa62-kube-api-access-w2xq6" (OuterVolumeSpecName: "kube-api-access-w2xq6") pod "96a33e39-df26-4233-aca9-edbe7b31aa62" (UID: "96a33e39-df26-4233-aca9-edbe7b31aa62"). InnerVolumeSpecName "kube-api-access-w2xq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.943253 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2xq6\" (UniqueName: \"kubernetes.io/projected/96a33e39-df26-4233-aca9-edbe7b31aa62-kube-api-access-w2xq6\") on node \"crc\" DevicePath \"\"" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.943306 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96a33e39-df26-4233-aca9-edbe7b31aa62-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:00:02 crc kubenswrapper[4940]: I0223 09:00:02.943325 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a33e39-df26-4233-aca9-edbe7b31aa62-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:00:03 crc kubenswrapper[4940]: I0223 09:00:03.421324 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" event={"ID":"96a33e39-df26-4233-aca9-edbe7b31aa62","Type":"ContainerDied","Data":"0c99f16705eba7ba7daffeb26ad04322e5d2f2d3e99df7fa1bef8236cedf630a"} Feb 23 09:00:03 crc kubenswrapper[4940]: I0223 09:00:03.421820 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c99f16705eba7ba7daffeb26ad04322e5d2f2d3e99df7fa1bef8236cedf630a" Feb 23 09:00:03 crc kubenswrapper[4940]: I0223 09:00:03.421887 4940 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr" Feb 23 09:00:03 crc kubenswrapper[4940]: E0223 09:00:03.570586 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:03 crc kubenswrapper[4940]: E0223 09:00:03.592371 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:04 crc kubenswrapper[4940]: E0223 09:00:04.744954 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:04 crc kubenswrapper[4940]: E0223 09:00:04.768573 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:05 crc kubenswrapper[4940]: E0223 09:00:05.909044 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:05 crc kubenswrapper[4940]: E0223 09:00:05.930160 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 
09:00:07 crc kubenswrapper[4940]: E0223 09:00:07.078157 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:07 crc kubenswrapper[4940]: E0223 09:00:07.095675 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:08 crc kubenswrapper[4940]: E0223 09:00:08.277914 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:08 crc kubenswrapper[4940]: E0223 09:00:08.303038 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:09 crc kubenswrapper[4940]: E0223 09:00:09.473109 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:09 crc kubenswrapper[4940]: E0223 09:00:09.498284 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:10 crc kubenswrapper[4940]: E0223 09:00:10.646963 4940 server.go:309] "Unable to authenticate the request due to an 
error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:10 crc kubenswrapper[4940]: E0223 09:00:10.663049 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:11 crc kubenswrapper[4940]: E0223 09:00:11.861145 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:11 crc kubenswrapper[4940]: E0223 09:00:11.882133 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:13 crc kubenswrapper[4940]: E0223 09:00:13.037029 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:13 crc kubenswrapper[4940]: E0223 09:00:13.053946 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:14 crc kubenswrapper[4940]: E0223 09:00:14.239061 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:14 crc kubenswrapper[4940]: E0223 09:00:14.260664 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:15 crc kubenswrapper[4940]: E0223 09:00:15.405868 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:15 crc kubenswrapper[4940]: E0223 09:00:15.428318 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:16 crc kubenswrapper[4940]: E0223 09:00:16.594396 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:16 crc kubenswrapper[4940]: E0223 09:00:16.616984 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:17 crc kubenswrapper[4940]: E0223 09:00:17.769824 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:00:17 crc kubenswrapper[4940]: E0223 09:00:17.791561 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:19 crc kubenswrapper[4940]: E0223 09:00:19.013165 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:19 crc kubenswrapper[4940]: E0223 09:00:19.031296 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:20 crc kubenswrapper[4940]: E0223 09:00:20.193054 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:20 crc kubenswrapper[4940]: E0223 09:00:20.213571 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:21 crc kubenswrapper[4940]: E0223 09:00:21.422222 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:21 crc kubenswrapper[4940]: E0223 09:00:21.435406 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:22 crc kubenswrapper[4940]: E0223 09:00:22.601058 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:22 crc kubenswrapper[4940]: E0223 09:00:22.621192 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:23 crc kubenswrapper[4940]: E0223 09:00:23.776046 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:23 crc kubenswrapper[4940]: E0223 09:00:23.795408 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:25 crc kubenswrapper[4940]: E0223 09:00:25.010954 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:25 crc kubenswrapper[4940]: E0223 09:00:25.030467 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:26 crc kubenswrapper[4940]: E0223 09:00:26.216575 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:26 crc kubenswrapper[4940]: E0223 09:00:26.231363 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:27 crc kubenswrapper[4940]: E0223 09:00:27.400992 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:27 crc kubenswrapper[4940]: E0223 09:00:27.419897 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:28 crc kubenswrapper[4940]: E0223 09:00:28.603143 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:28 crc kubenswrapper[4940]: E0223 09:00:28.623972 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:00:29 crc kubenswrapper[4940]: E0223 09:00:29.795966 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:29 crc kubenswrapper[4940]: E0223 09:00:29.814515 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:30 crc kubenswrapper[4940]: E0223 09:00:30.957928 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:30 crc kubenswrapper[4940]: E0223 09:00:30.979734 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:31 crc kubenswrapper[4940]: I0223 09:00:31.429178 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:00:31 crc kubenswrapper[4940]: I0223 09:00:31.429251 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 23 09:00:32 crc kubenswrapper[4940]: E0223 09:00:32.125441 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:32 crc kubenswrapper[4940]: E0223 09:00:32.143558 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:33 crc kubenswrapper[4940]: E0223 09:00:33.294284 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:33 crc kubenswrapper[4940]: E0223 09:00:33.315226 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:34 crc kubenswrapper[4940]: E0223 09:00:34.520898 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:34 crc kubenswrapper[4940]: E0223 09:00:34.546029 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:35 crc kubenswrapper[4940]: E0223 09:00:35.681559 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:35 crc kubenswrapper[4940]: E0223 09:00:35.700418 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:36 crc kubenswrapper[4940]: E0223 09:00:36.892814 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:36 crc kubenswrapper[4940]: E0223 09:00:36.911793 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:38 crc kubenswrapper[4940]: E0223 09:00:38.086354 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:38 crc kubenswrapper[4940]: E0223 09:00:38.102568 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:39 crc kubenswrapper[4940]: E0223 09:00:39.291083 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:39 crc kubenswrapper[4940]: E0223 09:00:39.314077 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:40 crc kubenswrapper[4940]: E0223 09:00:40.509921 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:40 crc kubenswrapper[4940]: E0223 09:00:40.526643 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:41 crc kubenswrapper[4940]: E0223 09:00:41.702759 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:41 crc kubenswrapper[4940]: E0223 09:00:41.725967 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:42 crc kubenswrapper[4940]: E0223 09:00:42.934753 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:00:42 crc kubenswrapper[4940]: E0223 09:00:42.953927 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:44 crc kubenswrapper[4940]: E0223 09:00:44.114301 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:44 crc kubenswrapper[4940]: E0223 09:00:44.135204 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:45 crc kubenswrapper[4940]: E0223 09:00:45.303340 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:45 crc kubenswrapper[4940]: E0223 09:00:45.325719 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:46 crc kubenswrapper[4940]: E0223 09:00:46.488504 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:46 crc kubenswrapper[4940]: E0223 09:00:46.509648 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:47 crc kubenswrapper[4940]: E0223 09:00:47.704111 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:47 crc kubenswrapper[4940]: E0223 09:00:47.725906 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:48 crc kubenswrapper[4940]: E0223 09:00:48.910430 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:48 crc kubenswrapper[4940]: E0223 09:00:48.926011 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:50 crc kubenswrapper[4940]: E0223 09:00:50.065889 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:50 crc kubenswrapper[4940]: E0223 09:00:50.087753 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:51 crc kubenswrapper[4940]: E0223 09:00:51.255654 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:51 crc kubenswrapper[4940]: E0223 09:00:51.269016 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:52 crc kubenswrapper[4940]: E0223 09:00:52.440183 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:52 crc kubenswrapper[4940]: E0223 09:00:52.459450 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:53 crc kubenswrapper[4940]: E0223 09:00:53.601863 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:53 crc kubenswrapper[4940]: E0223 09:00:53.622165 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:00:54 crc kubenswrapper[4940]: E0223 09:00:54.830328 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:54 crc kubenswrapper[4940]: E0223 09:00:54.849153 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:56 crc kubenswrapper[4940]: E0223 09:00:56.000392 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:56 crc kubenswrapper[4940]: E0223 09:00:56.020739 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:57 crc kubenswrapper[4940]: E0223 09:00:57.171198 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:57 crc kubenswrapper[4940]: E0223 09:00:57.190073 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:58 crc kubenswrapper[4940]: E0223 09:00:58.408385 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:58 crc kubenswrapper[4940]: E0223 09:00:58.420763 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:59 crc kubenswrapper[4940]: E0223 09:00:59.626243 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:00:59 crc kubenswrapper[4940]: E0223 09:00:59.647117 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:00 crc kubenswrapper[4940]: E0223 09:01:00.799823 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:00 crc kubenswrapper[4940]: E0223 09:01:00.815960 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:01 crc kubenswrapper[4940]: I0223 09:01:01.429926 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:01:01 crc kubenswrapper[4940]: I0223 09:01:01.430008 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:01:01 crc kubenswrapper[4940]: E0223 09:01:01.961874 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:01 crc kubenswrapper[4940]: E0223 09:01:01.977636 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:03 crc kubenswrapper[4940]: E0223 09:01:03.157971 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:03 crc kubenswrapper[4940]: E0223 09:01:03.180873 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:04 crc kubenswrapper[4940]: E0223 09:01:04.361774 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:04 crc kubenswrapper[4940]: E0223 09:01:04.384577 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:05 crc kubenswrapper[4940]: E0223 09:01:05.550286 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:05 crc kubenswrapper[4940]: E0223 09:01:05.574036 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:06 crc kubenswrapper[4940]: E0223 09:01:06.760016 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:06 crc kubenswrapper[4940]: E0223 09:01:06.777290 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:07 crc kubenswrapper[4940]: E0223 09:01:07.915742 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:01:07 crc kubenswrapper[4940]: E0223 09:01:07.933348 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:09 crc kubenswrapper[4940]: E0223 09:01:09.098665 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:09 crc kubenswrapper[4940]: E0223 09:01:09.120044 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:10 crc kubenswrapper[4940]: E0223 09:01:10.275742 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:10 crc kubenswrapper[4940]: E0223 09:01:10.298435 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:11 crc kubenswrapper[4940]: E0223 09:01:11.456095 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:11 crc kubenswrapper[4940]: E0223 09:01:11.476569 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:12 crc kubenswrapper[4940]: E0223 09:01:12.652100 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:12 crc kubenswrapper[4940]: E0223 09:01:12.666951 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:13 crc kubenswrapper[4940]: E0223 09:01:13.850536 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:13 crc kubenswrapper[4940]: E0223 09:01:13.870088 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:15 crc kubenswrapper[4940]: E0223 09:01:15.027178 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:15 crc kubenswrapper[4940]: E0223 09:01:15.047575 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:16 crc kubenswrapper[4940]: E0223 09:01:16.218124 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:16 crc kubenswrapper[4940]: E0223 09:01:16.238293 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:17 crc kubenswrapper[4940]: E0223 09:01:17.404767 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:17 crc kubenswrapper[4940]: E0223 09:01:17.424406 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:18 crc kubenswrapper[4940]: E0223 09:01:18.557195 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:18 crc kubenswrapper[4940]: E0223 09:01:18.577918 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:01:19 crc kubenswrapper[4940]: E0223 09:01:19.774312 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:19 crc kubenswrapper[4940]: E0223 09:01:19.791532 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:20 crc kubenswrapper[4940]: E0223 09:01:20.999327 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:21 crc kubenswrapper[4940]: E0223 09:01:21.021981 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:22 crc kubenswrapper[4940]: E0223 09:01:22.244864 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:22 crc kubenswrapper[4940]: E0223 09:01:22.268023 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:23 crc kubenswrapper[4940]: E0223 09:01:23.456558 4940 server.go:309] "Unable to authenticate the request 
due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:23 crc kubenswrapper[4940]: E0223 09:01:23.478945 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:24 crc kubenswrapper[4940]: E0223 09:01:24.666094 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:24 crc kubenswrapper[4940]: E0223 09:01:24.689708 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:25 crc kubenswrapper[4940]: E0223 09:01:25.852996 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:25 crc kubenswrapper[4940]: E0223 09:01:25.874997 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:27 crc kubenswrapper[4940]: E0223 09:01:27.020216 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, 
AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:27 crc kubenswrapper[4940]: E0223 09:01:27.038743 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:28 crc kubenswrapper[4940]: E0223 09:01:28.189869 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:28 crc kubenswrapper[4940]: E0223 09:01:28.209511 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:29 crc kubenswrapper[4940]: E0223 09:01:29.372688 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:29 crc kubenswrapper[4940]: E0223 09:01:29.390541 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:30 crc kubenswrapper[4940]: E0223 09:01:30.538140 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" 
Feb 23 09:01:30 crc kubenswrapper[4940]: E0223 09:01:30.557503 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:31 crc kubenswrapper[4940]: I0223 09:01:31.429422 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:01:31 crc kubenswrapper[4940]: I0223 09:01:31.429488 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:01:31 crc kubenswrapper[4940]: I0223 09:01:31.429534 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:01:31 crc kubenswrapper[4940]: I0223 09:01:31.430064 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bd3eb7943bad5600333867d165292448d5139e55ddeeb5836e6b84ea71a9212"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:01:31 crc kubenswrapper[4940]: I0223 09:01:31.430118 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" 
containerID="cri-o://2bd3eb7943bad5600333867d165292448d5139e55ddeeb5836e6b84ea71a9212" gracePeriod=600 Feb 23 09:01:31 crc kubenswrapper[4940]: E0223 09:01:31.729018 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:31 crc kubenswrapper[4940]: E0223 09:01:31.746011 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:32 crc kubenswrapper[4940]: I0223 09:01:32.009251 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="2bd3eb7943bad5600333867d165292448d5139e55ddeeb5836e6b84ea71a9212" exitCode=0 Feb 23 09:01:32 crc kubenswrapper[4940]: I0223 09:01:32.009291 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"2bd3eb7943bad5600333867d165292448d5139e55ddeeb5836e6b84ea71a9212"} Feb 23 09:01:32 crc kubenswrapper[4940]: I0223 09:01:32.009882 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"cb93228543200fd2d6020d08fa2989a091f75ae9c39f73df80e0c09ed858a572"} Feb 23 09:01:32 crc kubenswrapper[4940]: I0223 09:01:32.009930 4940 scope.go:117] "RemoveContainer" containerID="082fd847d235e860d5089e1f82b477d7544abdb6fa8b0e1a1f32dd0087a19959" Feb 23 09:01:32 crc kubenswrapper[4940]: E0223 09:01:32.931233 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate 
SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:32 crc kubenswrapper[4940]: E0223 09:01:32.946323 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:34 crc kubenswrapper[4940]: E0223 09:01:34.147714 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:34 crc kubenswrapper[4940]: E0223 09:01:34.164033 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:35 crc kubenswrapper[4940]: E0223 09:01:35.314752 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:35 crc kubenswrapper[4940]: E0223 09:01:35.335444 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:36 crc kubenswrapper[4940]: E0223 09:01:36.512279 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate 
signed by unknown authority" Feb 23 09:01:36 crc kubenswrapper[4940]: E0223 09:01:36.529547 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:37 crc kubenswrapper[4940]: E0223 09:01:37.689889 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:37 crc kubenswrapper[4940]: E0223 09:01:37.708599 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:38 crc kubenswrapper[4940]: E0223 09:01:38.872477 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:38 crc kubenswrapper[4940]: E0223 09:01:38.892037 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:40 crc kubenswrapper[4940]: E0223 09:01:40.028162 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:40 crc kubenswrapper[4940]: E0223 09:01:40.051498 4940 server.go:309] "Unable 
to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:41 crc kubenswrapper[4940]: E0223 09:01:41.234747 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:41 crc kubenswrapper[4940]: E0223 09:01:41.251889 4940 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=448311534621890053, SKID=, AKID=C6:4D:7B:6C:15:76:9B:C3:BF:1E:FD:1D:36:03:77:6E:A0:30:BF:77 failed: x509: certificate signed by unknown authority" Feb 23 09:01:42 crc kubenswrapper[4940]: I0223 09:01:42.242434 4940 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 09:02:08 crc kubenswrapper[4940]: E0223 09:02:08.465488 4940 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:39926->38.102.83.222:40203: write tcp 38.102.83.222:39926->38.102.83.222:40203: write: broken pipe Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.144336 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q"] Feb 23 09:02:30 crc kubenswrapper[4940]: E0223 09:02:30.145025 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a33e39-df26-4233-aca9-edbe7b31aa62" containerName="collect-profiles" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.145036 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a33e39-df26-4233-aca9-edbe7b31aa62" containerName="collect-profiles" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.145120 4940 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="96a33e39-df26-4233-aca9-edbe7b31aa62" containerName="collect-profiles" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.145811 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.147669 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.159322 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q"] Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.205518 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj89r\" (UniqueName: \"kubernetes.io/projected/3db796ec-3e41-4deb-abb8-e60eb37a659a-kube-api-access-hj89r\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.205566 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.205605 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.306418 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj89r\" (UniqueName: \"kubernetes.io/projected/3db796ec-3e41-4deb-abb8-e60eb37a659a-kube-api-access-hj89r\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.306863 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.307265 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.307321 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.307588 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.339495 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj89r\" (UniqueName: \"kubernetes.io/projected/3db796ec-3e41-4deb-abb8-e60eb37a659a-kube-api-access-hj89r\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:30 crc kubenswrapper[4940]: I0223 09:02:30.513131 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:31 crc kubenswrapper[4940]: I0223 09:02:31.246887 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q"] Feb 23 09:02:31 crc kubenswrapper[4940]: I0223 09:02:31.451939 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" event={"ID":"3db796ec-3e41-4deb-abb8-e60eb37a659a","Type":"ContainerStarted","Data":"51f6e02883804a598b4b0632a0a7d9acb37745e80faa319e3a23689e2128fc8c"} Feb 23 09:02:31 crc kubenswrapper[4940]: I0223 09:02:31.452012 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" event={"ID":"3db796ec-3e41-4deb-abb8-e60eb37a659a","Type":"ContainerStarted","Data":"d5da0c45202d774358020d3edfa05601b5aafa7296803cb911814b2b1dd02237"} Feb 23 09:02:31 crc kubenswrapper[4940]: I0223 09:02:31.893232 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jm5gf"] Feb 23 09:02:31 crc kubenswrapper[4940]: I0223 09:02:31.895563 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:31 crc kubenswrapper[4940]: I0223 09:02:31.904143 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jm5gf"] Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.113731 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-utilities\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.114141 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-catalog-content\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.114285 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vjh\" (UniqueName: \"kubernetes.io/projected/b7282357-6def-4e94-8ea1-1a07d31044a9-kube-api-access-w6vjh\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.215327 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vjh\" (UniqueName: \"kubernetes.io/projected/b7282357-6def-4e94-8ea1-1a07d31044a9-kube-api-access-w6vjh\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.215605 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-utilities\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.215774 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-catalog-content\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.216146 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-utilities\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.216184 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-catalog-content\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.235689 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vjh\" (UniqueName: \"kubernetes.io/projected/b7282357-6def-4e94-8ea1-1a07d31044a9-kube-api-access-w6vjh\") pod \"redhat-operators-jm5gf\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.318969 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.458367 4940 generic.go:334] "Generic (PLEG): container finished" podID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerID="51f6e02883804a598b4b0632a0a7d9acb37745e80faa319e3a23689e2128fc8c" exitCode=0 Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.458575 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" event={"ID":"3db796ec-3e41-4deb-abb8-e60eb37a659a","Type":"ContainerDied","Data":"51f6e02883804a598b4b0632a0a7d9acb37745e80faa319e3a23689e2128fc8c"} Feb 23 09:02:32 crc kubenswrapper[4940]: I0223 09:02:32.845569 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jm5gf"] Feb 23 09:02:32 crc kubenswrapper[4940]: W0223 09:02:32.856202 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7282357_6def_4e94_8ea1_1a07d31044a9.slice/crio-711a6cea33a38ac62691550fc04741c9128ad0fd38d0f73780ca4457702132aa WatchSource:0}: Error finding container 711a6cea33a38ac62691550fc04741c9128ad0fd38d0f73780ca4457702132aa: Status 404 returned error can't find the container with id 711a6cea33a38ac62691550fc04741c9128ad0fd38d0f73780ca4457702132aa Feb 23 09:02:33 crc kubenswrapper[4940]: I0223 09:02:33.468293 4940 generic.go:334] "Generic (PLEG): container finished" podID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerID="1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138" exitCode=0 Feb 23 09:02:33 crc kubenswrapper[4940]: I0223 09:02:33.468349 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerDied","Data":"1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138"} Feb 23 09:02:33 crc 
kubenswrapper[4940]: I0223 09:02:33.468380 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerStarted","Data":"711a6cea33a38ac62691550fc04741c9128ad0fd38d0f73780ca4457702132aa"} Feb 23 09:02:34 crc kubenswrapper[4940]: I0223 09:02:34.485925 4940 generic.go:334] "Generic (PLEG): container finished" podID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerID="97768ac3949d04453d3b492f7b82ea49714bf3ee56cb71e81ffd6135c4129304" exitCode=0 Feb 23 09:02:34 crc kubenswrapper[4940]: I0223 09:02:34.486068 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" event={"ID":"3db796ec-3e41-4deb-abb8-e60eb37a659a","Type":"ContainerDied","Data":"97768ac3949d04453d3b492f7b82ea49714bf3ee56cb71e81ffd6135c4129304"} Feb 23 09:02:34 crc kubenswrapper[4940]: I0223 09:02:34.493647 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerStarted","Data":"abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff"} Feb 23 09:02:35 crc kubenswrapper[4940]: I0223 09:02:35.501296 4940 generic.go:334] "Generic (PLEG): container finished" podID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerID="ecb6fbc55c28f90f7999cf00c767069f73515ca752d74ade264f8d995caa88d3" exitCode=0 Feb 23 09:02:35 crc kubenswrapper[4940]: I0223 09:02:35.501353 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" event={"ID":"3db796ec-3e41-4deb-abb8-e60eb37a659a","Type":"ContainerDied","Data":"ecb6fbc55c28f90f7999cf00c767069f73515ca752d74ade264f8d995caa88d3"} Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.429063 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.719907 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" event={"ID":"3db796ec-3e41-4deb-abb8-e60eb37a659a","Type":"ContainerDied","Data":"d5da0c45202d774358020d3edfa05601b5aafa7296803cb911814b2b1dd02237"} Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.719957 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5da0c45202d774358020d3edfa05601b5aafa7296803cb911814b2b1dd02237" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.720035 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.812739 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-util\") pod \"3db796ec-3e41-4deb-abb8-e60eb37a659a\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.812922 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj89r\" (UniqueName: \"kubernetes.io/projected/3db796ec-3e41-4deb-abb8-e60eb37a659a-kube-api-access-hj89r\") pod \"3db796ec-3e41-4deb-abb8-e60eb37a659a\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.813020 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-bundle\") pod \"3db796ec-3e41-4deb-abb8-e60eb37a659a\" (UID: \"3db796ec-3e41-4deb-abb8-e60eb37a659a\") " Feb 23 09:02:37 crc 
kubenswrapper[4940]: I0223 09:02:37.814013 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-bundle" (OuterVolumeSpecName: "bundle") pod "3db796ec-3e41-4deb-abb8-e60eb37a659a" (UID: "3db796ec-3e41-4deb-abb8-e60eb37a659a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.822574 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db796ec-3e41-4deb-abb8-e60eb37a659a-kube-api-access-hj89r" (OuterVolumeSpecName: "kube-api-access-hj89r") pod "3db796ec-3e41-4deb-abb8-e60eb37a659a" (UID: "3db796ec-3e41-4deb-abb8-e60eb37a659a"). InnerVolumeSpecName "kube-api-access-hj89r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.836192 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-util" (OuterVolumeSpecName: "util") pod "3db796ec-3e41-4deb-abb8-e60eb37a659a" (UID: "3db796ec-3e41-4deb-abb8-e60eb37a659a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.914461 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj89r\" (UniqueName: \"kubernetes.io/projected/3db796ec-3e41-4deb-abb8-e60eb37a659a-kube-api-access-hj89r\") on node \"crc\" DevicePath \"\"" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.914492 4940 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:02:37 crc kubenswrapper[4940]: I0223 09:02:37.914500 4940 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3db796ec-3e41-4deb-abb8-e60eb37a659a-util\") on node \"crc\" DevicePath \"\"" Feb 23 09:02:38 crc kubenswrapper[4940]: I0223 09:02:38.727857 4940 generic.go:334] "Generic (PLEG): container finished" podID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerID="abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff" exitCode=0 Feb 23 09:02:38 crc kubenswrapper[4940]: I0223 09:02:38.727927 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerDied","Data":"abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff"} Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.737436 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerStarted","Data":"55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2"} Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.858903 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jm5gf" podStartSLOduration=3.281079245 podStartE2EDuration="8.858867404s" 
podCreationTimestamp="2026-02-23 09:02:31 +0000 UTC" firstStartedPulling="2026-02-23 09:02:33.54692615 +0000 UTC m=+884.930132307" lastFinishedPulling="2026-02-23 09:02:39.124714279 +0000 UTC m=+890.507920466" observedRunningTime="2026-02-23 09:02:39.853331914 +0000 UTC m=+891.236538111" watchObservedRunningTime="2026-02-23 09:02:39.858867404 +0000 UTC m=+891.242073561" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.926248 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2lctm"] Feb 23 09:02:39 crc kubenswrapper[4940]: E0223 09:02:39.926463 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="extract" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.926476 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="extract" Feb 23 09:02:39 crc kubenswrapper[4940]: E0223 09:02:39.926489 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="pull" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.926512 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="pull" Feb 23 09:02:39 crc kubenswrapper[4940]: E0223 09:02:39.926529 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="util" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.926535 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="util" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.926650 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db796ec-3e41-4deb-abb8-e60eb37a659a" containerName="extract" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.927046 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.928670 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-g8dhh" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.932982 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.933185 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 23 09:02:39 crc kubenswrapper[4940]: I0223 09:02:39.945397 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2lctm"] Feb 23 09:02:40 crc kubenswrapper[4940]: I0223 09:02:40.044369 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z267\" (UniqueName: \"kubernetes.io/projected/6a03ba2d-040d-4fe6-ac2f-081bb22e1f38-kube-api-access-9z267\") pod \"nmstate-operator-694c9596b7-2lctm\" (UID: \"6a03ba2d-040d-4fe6-ac2f-081bb22e1f38\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" Feb 23 09:02:40 crc kubenswrapper[4940]: I0223 09:02:40.146230 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z267\" (UniqueName: \"kubernetes.io/projected/6a03ba2d-040d-4fe6-ac2f-081bb22e1f38-kube-api-access-9z267\") pod \"nmstate-operator-694c9596b7-2lctm\" (UID: \"6a03ba2d-040d-4fe6-ac2f-081bb22e1f38\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" Feb 23 09:02:40 crc kubenswrapper[4940]: I0223 09:02:40.180535 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z267\" (UniqueName: \"kubernetes.io/projected/6a03ba2d-040d-4fe6-ac2f-081bb22e1f38-kube-api-access-9z267\") pod \"nmstate-operator-694c9596b7-2lctm\" (UID: 
\"6a03ba2d-040d-4fe6-ac2f-081bb22e1f38\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" Feb 23 09:02:40 crc kubenswrapper[4940]: I0223 09:02:40.241079 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" Feb 23 09:02:40 crc kubenswrapper[4940]: I0223 09:02:40.638048 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2lctm"] Feb 23 09:02:40 crc kubenswrapper[4940]: W0223 09:02:40.642835 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a03ba2d_040d_4fe6_ac2f_081bb22e1f38.slice/crio-89e2282de70c8fe1be6ebf3a3c5a1600f9426af0fbb62149bf2edbd3dee76b5d WatchSource:0}: Error finding container 89e2282de70c8fe1be6ebf3a3c5a1600f9426af0fbb62149bf2edbd3dee76b5d: Status 404 returned error can't find the container with id 89e2282de70c8fe1be6ebf3a3c5a1600f9426af0fbb62149bf2edbd3dee76b5d Feb 23 09:02:40 crc kubenswrapper[4940]: I0223 09:02:40.743885 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" event={"ID":"6a03ba2d-040d-4fe6-ac2f-081bb22e1f38","Type":"ContainerStarted","Data":"89e2282de70c8fe1be6ebf3a3c5a1600f9426af0fbb62149bf2edbd3dee76b5d"} Feb 23 09:02:42 crc kubenswrapper[4940]: I0223 09:02:42.319740 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:42 crc kubenswrapper[4940]: I0223 09:02:42.320211 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:43 crc kubenswrapper[4940]: I0223 09:02:43.403817 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jm5gf" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="registry-server" probeResult="failure" output=< Feb 
23 09:02:43 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:02:43 crc kubenswrapper[4940]: > Feb 23 09:02:43 crc kubenswrapper[4940]: I0223 09:02:43.761736 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" event={"ID":"6a03ba2d-040d-4fe6-ac2f-081bb22e1f38","Type":"ContainerStarted","Data":"c49107427a1358933e3e5119e2cb889d743b71d4a6bd913a2990b748bf884734"} Feb 23 09:02:43 crc kubenswrapper[4940]: I0223 09:02:43.779087 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-2lctm" podStartSLOduration=2.203767946 podStartE2EDuration="4.779066443s" podCreationTimestamp="2026-02-23 09:02:39 +0000 UTC" firstStartedPulling="2026-02-23 09:02:40.645187925 +0000 UTC m=+892.028394082" lastFinishedPulling="2026-02-23 09:02:43.220486262 +0000 UTC m=+894.603692579" observedRunningTime="2026-02-23 09:02:43.7758229 +0000 UTC m=+895.159029077" watchObservedRunningTime="2026-02-23 09:02:43.779066443 +0000 UTC m=+895.162272620" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.786547 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kcszk"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.788198 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.790221 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8pzxh" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.804314 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.805233 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.808844 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kcszk"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.810839 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.819787 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.831447 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-frr6p"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.832296 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849006 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffc6\" (UniqueName: \"kubernetes.io/projected/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-kube-api-access-sffc6\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849052 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db49f\" (UniqueName: \"kubernetes.io/projected/a28be9f7-f2d0-4349-8432-a33d0f04d076-kube-api-access-db49f\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849080 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-nmstate-lock\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849107 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-ovs-socket\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849126 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849157 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6kh6\" (UniqueName: \"kubernetes.io/projected/06a06080-4162-423f-bd67-2cdc3aa6cec0-kube-api-access-n6kh6\") pod \"nmstate-metrics-58c85c668d-kcszk\" (UID: \"06a06080-4162-423f-bd67-2cdc3aa6cec0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.849176 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-dbus-socket\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.918129 4940 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.918999 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.920755 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.921013 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-kdpmr" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.921184 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.929791 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj"] Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949706 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e77fac6b-039a-43b2-ad12-f5e506201ef7-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949790 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sffc6\" (UniqueName: \"kubernetes.io/projected/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-kube-api-access-sffc6\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949832 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db49f\" (UniqueName: 
\"kubernetes.io/projected/a28be9f7-f2d0-4349-8432-a33d0f04d076-kube-api-access-db49f\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949862 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-nmstate-lock\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949887 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-ovs-socket\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949922 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949948 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77fac6b-039a-43b2-ad12-f5e506201ef7-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949965 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-ovs-socket\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949966 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9qs\" (UniqueName: \"kubernetes.io/projected/e77fac6b-039a-43b2-ad12-f5e506201ef7-kube-api-access-jb9qs\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.950027 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6kh6\" (UniqueName: \"kubernetes.io/projected/06a06080-4162-423f-bd67-2cdc3aa6cec0-kube-api-access-n6kh6\") pod \"nmstate-metrics-58c85c668d-kcszk\" (UID: \"06a06080-4162-423f-bd67-2cdc3aa6cec0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.950049 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-dbus-socket\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.949962 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-nmstate-lock\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: E0223 09:02:49.950069 4940 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret 
"openshift-nmstate-webhook" not found Feb 23 09:02:49 crc kubenswrapper[4940]: E0223 09:02:49.950156 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-tls-key-pair podName:f78f27f8-0a49-4aef-9e58-0cdb19fddbe9 nodeName:}" failed. No retries permitted until 2026-02-23 09:02:50.450134564 +0000 UTC m=+901.833340821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-tls-key-pair") pod "nmstate-webhook-866bcb46dc-btmp6" (UID: "f78f27f8-0a49-4aef-9e58-0cdb19fddbe9") : secret "openshift-nmstate-webhook" not found Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.950348 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a28be9f7-f2d0-4349-8432-a33d0f04d076-dbus-socket\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.978393 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db49f\" (UniqueName: \"kubernetes.io/projected/a28be9f7-f2d0-4349-8432-a33d0f04d076-kube-api-access-db49f\") pod \"nmstate-handler-frr6p\" (UID: \"a28be9f7-f2d0-4349-8432-a33d0f04d076\") " pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.978437 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sffc6\" (UniqueName: \"kubernetes.io/projected/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-kube-api-access-sffc6\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:49 crc kubenswrapper[4940]: I0223 09:02:49.982283 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n6kh6\" (UniqueName: \"kubernetes.io/projected/06a06080-4162-423f-bd67-2cdc3aa6cec0-kube-api-access-n6kh6\") pod \"nmstate-metrics-58c85c668d-kcszk\" (UID: \"06a06080-4162-423f-bd67-2cdc3aa6cec0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.050766 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e77fac6b-039a-43b2-ad12-f5e506201ef7-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.050889 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77fac6b-039a-43b2-ad12-f5e506201ef7-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.050920 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9qs\" (UniqueName: \"kubernetes.io/projected/e77fac6b-039a-43b2-ad12-f5e506201ef7-kube-api-access-jb9qs\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.052163 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.052163 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 23 09:02:50 crc kubenswrapper[4940]: E0223 09:02:50.061870 4940 secret.go:188] Couldn't get secret 
openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 23 09:02:50 crc kubenswrapper[4940]: E0223 09:02:50.062258 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e77fac6b-039a-43b2-ad12-f5e506201ef7-plugin-serving-cert podName:e77fac6b-039a-43b2-ad12-f5e506201ef7 nodeName:}" failed. No retries permitted until 2026-02-23 09:02:50.562235144 +0000 UTC m=+901.945441381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/e77fac6b-039a-43b2-ad12-f5e506201ef7-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-w29cj" (UID: "e77fac6b-039a-43b2-ad12-f5e506201ef7") : secret "plugin-serving-cert" not found Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.062543 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e77fac6b-039a-43b2-ad12-f5e506201ef7-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.082443 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9qs\" (UniqueName: \"kubernetes.io/projected/e77fac6b-039a-43b2-ad12-f5e506201ef7-kube-api-access-jb9qs\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.111645 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8pzxh" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.121120 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.154665 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:50 crc kubenswrapper[4940]: W0223 09:02:50.315539 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda28be9f7_f2d0_4349_8432_a33d0f04d076.slice/crio-299c57aa32d06c903e0258e5881ff96dc796025e4813a280fe15a35268f941a8 WatchSource:0}: Error finding container 299c57aa32d06c903e0258e5881ff96dc796025e4813a280fe15a35268f941a8: Status 404 returned error can't find the container with id 299c57aa32d06c903e0258e5881ff96dc796025e4813a280fe15a35268f941a8 Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.374890 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cd86f9fd7-66xg8"] Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.375572 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.392750 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-trusted-ca-bundle\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.392806 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-config\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.392894 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-service-ca\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.392930 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-oauth-serving-cert\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.392953 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlx7d\" (UniqueName: 
\"kubernetes.io/projected/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-kube-api-access-xlx7d\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.393018 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-serving-cert\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.393047 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-oauth-config\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.394883 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cd86f9fd7-66xg8"] Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617425 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-serving-cert\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617688 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-oauth-config\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " 
pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617735 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-trusted-ca-bundle\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617757 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-config\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617815 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617837 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-service-ca\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617862 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-oauth-serving-cert\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " 
pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617885 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlx7d\" (UniqueName: \"kubernetes.io/projected/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-kube-api-access-xlx7d\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.617912 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77fac6b-039a-43b2-ad12-f5e506201ef7-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.618818 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-trusted-ca-bundle\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.618838 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-service-ca\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.618991 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-oauth-serving-cert\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " 
pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.619378 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-config\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.623213 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-oauth-config\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.623234 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f78f27f8-0a49-4aef-9e58-0cdb19fddbe9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-btmp6\" (UID: \"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.624080 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e77fac6b-039a-43b2-ad12-f5e506201ef7-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-w29cj\" (UID: \"e77fac6b-039a-43b2-ad12-f5e506201ef7\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.624860 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-console-serving-cert\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " 
pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.638479 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlx7d\" (UniqueName: \"kubernetes.io/projected/925ecf5d-4ab6-43a1-952e-43ad0fc9b276-kube-api-access-xlx7d\") pod \"console-7cd86f9fd7-66xg8\" (UID: \"925ecf5d-4ab6-43a1-952e-43ad0fc9b276\") " pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.727930 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.753904 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.825289 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-frr6p" event={"ID":"a28be9f7-f2d0-4349-8432-a33d0f04d076","Type":"ContainerStarted","Data":"299c57aa32d06c903e0258e5881ff96dc796025e4813a280fe15a35268f941a8"} Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.835383 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-kdpmr" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.843239 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" Feb 23 09:02:50 crc kubenswrapper[4940]: I0223 09:02:50.876913 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-kcszk"] Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.145203 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6"] Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.361922 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cd86f9fd7-66xg8"] Feb 23 09:02:51 crc kubenswrapper[4940]: W0223 09:02:51.365908 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925ecf5d_4ab6_43a1_952e_43ad0fc9b276.slice/crio-a32cd6904a8b912d24f3bf4caebc51cd6335845ebe6cc6a5b2d5031ab3fdae85 WatchSource:0}: Error finding container a32cd6904a8b912d24f3bf4caebc51cd6335845ebe6cc6a5b2d5031ab3fdae85: Status 404 returned error can't find the container with id a32cd6904a8b912d24f3bf4caebc51cd6335845ebe6cc6a5b2d5031ab3fdae85 Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.419123 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj"] Feb 23 09:02:51 crc kubenswrapper[4940]: W0223 09:02:51.431304 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode77fac6b_039a_43b2_ad12_f5e506201ef7.slice/crio-416389730ae19aa04bfac70b20a6a594bd135cdba333cdc0b8ba716545928e4b WatchSource:0}: Error finding container 416389730ae19aa04bfac70b20a6a594bd135cdba333cdc0b8ba716545928e4b: Status 404 returned error can't find the container with id 416389730ae19aa04bfac70b20a6a594bd135cdba333cdc0b8ba716545928e4b Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.833945 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" event={"ID":"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9","Type":"ContainerStarted","Data":"4541840b15466faa73f3c890abfb32063206ef99c8f4041b54135a40f5aae077"} Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.836482 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cd86f9fd7-66xg8" event={"ID":"925ecf5d-4ab6-43a1-952e-43ad0fc9b276","Type":"ContainerStarted","Data":"02089f3e0367bf41d6615c28b814fa36a02a2f9c68cfb67a3311354e97193178"} Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.836523 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cd86f9fd7-66xg8" event={"ID":"925ecf5d-4ab6-43a1-952e-43ad0fc9b276","Type":"ContainerStarted","Data":"a32cd6904a8b912d24f3bf4caebc51cd6335845ebe6cc6a5b2d5031ab3fdae85"} Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.837762 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" event={"ID":"e77fac6b-039a-43b2-ad12-f5e506201ef7","Type":"ContainerStarted","Data":"416389730ae19aa04bfac70b20a6a594bd135cdba333cdc0b8ba716545928e4b"} Feb 23 09:02:51 crc kubenswrapper[4940]: I0223 09:02:51.839017 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" event={"ID":"06a06080-4162-423f-bd67-2cdc3aa6cec0","Type":"ContainerStarted","Data":"b351bb06cff4e18911be98f108228d45ba89edb712f7a772da9e349454887945"} Feb 23 09:02:52 crc kubenswrapper[4940]: I0223 09:02:52.703834 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:52 crc kubenswrapper[4940]: I0223 09:02:52.725153 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cd86f9fd7-66xg8" podStartSLOduration=2.725138482 podStartE2EDuration="2.725138482s" podCreationTimestamp="2026-02-23 09:02:50 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:02:51.858125879 +0000 UTC m=+903.241332036" watchObservedRunningTime="2026-02-23 09:02:52.725138482 +0000 UTC m=+904.108344649" Feb 23 09:02:52 crc kubenswrapper[4940]: I0223 09:02:52.761260 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:52 crc kubenswrapper[4940]: I0223 09:02:52.989661 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jm5gf"] Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.009313 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jm5gf" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="registry-server" containerID="cri-o://55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2" gracePeriod=2 Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.841566 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.963870 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-catalog-content\") pod \"b7282357-6def-4e94-8ea1-1a07d31044a9\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.963939 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6vjh\" (UniqueName: \"kubernetes.io/projected/b7282357-6def-4e94-8ea1-1a07d31044a9-kube-api-access-w6vjh\") pod \"b7282357-6def-4e94-8ea1-1a07d31044a9\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.964012 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-utilities\") pod \"b7282357-6def-4e94-8ea1-1a07d31044a9\" (UID: \"b7282357-6def-4e94-8ea1-1a07d31044a9\") " Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.965419 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-utilities" (OuterVolumeSpecName: "utilities") pod "b7282357-6def-4e94-8ea1-1a07d31044a9" (UID: "b7282357-6def-4e94-8ea1-1a07d31044a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:02:54 crc kubenswrapper[4940]: I0223 09:02:54.968995 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7282357-6def-4e94-8ea1-1a07d31044a9-kube-api-access-w6vjh" (OuterVolumeSpecName: "kube-api-access-w6vjh") pod "b7282357-6def-4e94-8ea1-1a07d31044a9" (UID: "b7282357-6def-4e94-8ea1-1a07d31044a9"). InnerVolumeSpecName "kube-api-access-w6vjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.020873 4940 generic.go:334] "Generic (PLEG): container finished" podID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerID="55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2" exitCode=0 Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.020955 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jm5gf" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.020954 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerDied","Data":"55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2"} Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.021100 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jm5gf" event={"ID":"b7282357-6def-4e94-8ea1-1a07d31044a9","Type":"ContainerDied","Data":"711a6cea33a38ac62691550fc04741c9128ad0fd38d0f73780ca4457702132aa"} Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.021135 4940 scope.go:117] "RemoveContainer" containerID="55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.022913 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" event={"ID":"06a06080-4162-423f-bd67-2cdc3aa6cec0","Type":"ContainerStarted","Data":"45e5da5508968040760382db4114eb7e9fd71e044ee96bcc4eeaa2959601aec7"} Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.024725 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" event={"ID":"f78f27f8-0a49-4aef-9e58-0cdb19fddbe9","Type":"ContainerStarted","Data":"81d1c7502184a22abfb396bb9315f83253b387534eec3e2203eabcd46459e171"} Feb 23 09:02:55 crc 
kubenswrapper[4940]: I0223 09:02:55.025174 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.027153 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-frr6p" event={"ID":"a28be9f7-f2d0-4349-8432-a33d0f04d076","Type":"ContainerStarted","Data":"efb60c237f3e85ea6595f8463ddaba48b84aaadc32b8ea6b70aff2b4b0fed0b3"} Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.027286 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.044032 4940 scope.go:117] "RemoveContainer" containerID="abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.048359 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" podStartSLOduration=3.327033129 podStartE2EDuration="6.048326394s" podCreationTimestamp="2026-02-23 09:02:49 +0000 UTC" firstStartedPulling="2026-02-23 09:02:51.154477106 +0000 UTC m=+902.537683263" lastFinishedPulling="2026-02-23 09:02:53.875770361 +0000 UTC m=+905.258976528" observedRunningTime="2026-02-23 09:02:55.039447417 +0000 UTC m=+906.422653574" watchObservedRunningTime="2026-02-23 09:02:55.048326394 +0000 UTC m=+906.431532551" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.058764 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-frr6p" podStartSLOduration=2.501888724 podStartE2EDuration="6.058746994s" podCreationTimestamp="2026-02-23 09:02:49 +0000 UTC" firstStartedPulling="2026-02-23 09:02:50.31717174 +0000 UTC m=+901.700377907" lastFinishedPulling="2026-02-23 09:02:53.87403002 +0000 UTC m=+905.257236177" observedRunningTime="2026-02-23 09:02:55.055271115 +0000 UTC 
m=+906.438477292" watchObservedRunningTime="2026-02-23 09:02:55.058746994 +0000 UTC m=+906.441953151" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.066393 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.066424 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6vjh\" (UniqueName: \"kubernetes.io/projected/b7282357-6def-4e94-8ea1-1a07d31044a9-kube-api-access-w6vjh\") on node \"crc\" DevicePath \"\"" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.074728 4940 scope.go:117] "RemoveContainer" containerID="1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.087512 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7282357-6def-4e94-8ea1-1a07d31044a9" (UID: "b7282357-6def-4e94-8ea1-1a07d31044a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.103654 4940 scope.go:117] "RemoveContainer" containerID="55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2" Feb 23 09:02:55 crc kubenswrapper[4940]: E0223 09:02:55.104264 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2\": container with ID starting with 55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2 not found: ID does not exist" containerID="55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.104292 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2"} err="failed to get container status \"55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2\": rpc error: code = NotFound desc = could not find container \"55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2\": container with ID starting with 55f9788e437235e6b707be377686d853d592ee549967798704928daec0e298a2 not found: ID does not exist" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.104312 4940 scope.go:117] "RemoveContainer" containerID="abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff" Feb 23 09:02:55 crc kubenswrapper[4940]: E0223 09:02:55.104810 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff\": container with ID starting with abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff not found: ID does not exist" containerID="abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.104843 
4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff"} err="failed to get container status \"abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff\": rpc error: code = NotFound desc = could not find container \"abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff\": container with ID starting with abfadf1f49d5f95651b3c939640d359ac51e35e302092b4ec65c97edfe2c29ff not found: ID does not exist" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.104863 4940 scope.go:117] "RemoveContainer" containerID="1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138" Feb 23 09:02:55 crc kubenswrapper[4940]: E0223 09:02:55.105290 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138\": container with ID starting with 1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138 not found: ID does not exist" containerID="1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.105361 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138"} err="failed to get container status \"1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138\": rpc error: code = NotFound desc = could not find container \"1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138\": container with ID starting with 1ec3e39d5bd14e9a91fec33dfe9d4a346a15d6448a2e154019b7d017520b7138 not found: ID does not exist" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.167446 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b7282357-6def-4e94-8ea1-1a07d31044a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.358950 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jm5gf"] Feb 23 09:02:55 crc kubenswrapper[4940]: I0223 09:02:55.363025 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jm5gf"] Feb 23 09:02:56 crc kubenswrapper[4940]: I0223 09:02:56.039400 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" event={"ID":"e77fac6b-039a-43b2-ad12-f5e506201ef7","Type":"ContainerStarted","Data":"9cbdebf7082f7de5665ff5b082d4230100c2a78b31573ebd674641f6622c8b4d"} Feb 23 09:02:56 crc kubenswrapper[4940]: I0223 09:02:56.060132 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-w29cj" podStartSLOduration=3.641719932 podStartE2EDuration="7.060109271s" podCreationTimestamp="2026-02-23 09:02:49 +0000 UTC" firstStartedPulling="2026-02-23 09:02:51.434313062 +0000 UTC m=+902.817519219" lastFinishedPulling="2026-02-23 09:02:54.852702401 +0000 UTC m=+906.235908558" observedRunningTime="2026-02-23 09:02:56.057007791 +0000 UTC m=+907.440213958" watchObservedRunningTime="2026-02-23 09:02:56.060109271 +0000 UTC m=+907.443315428" Feb 23 09:02:57 crc kubenswrapper[4940]: I0223 09:02:57.049197 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" event={"ID":"06a06080-4162-423f-bd67-2cdc3aa6cec0","Type":"ContainerStarted","Data":"2433ce704cf362d05ceccaa5b45923a9e98df4702eb649f727311b53eb0159d5"} Feb 23 09:02:57 crc kubenswrapper[4940]: I0223 09:02:57.353204 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" 
path="/var/lib/kubelet/pods/b7282357-6def-4e94-8ea1-1a07d31044a9/volumes" Feb 23 09:03:00 crc kubenswrapper[4940]: I0223 09:03:00.186126 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-frr6p" Feb 23 09:03:00 crc kubenswrapper[4940]: I0223 09:03:00.204400 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-kcszk" podStartSLOduration=6.016488797 podStartE2EDuration="11.204380268s" podCreationTimestamp="2026-02-23 09:02:49 +0000 UTC" firstStartedPulling="2026-02-23 09:02:50.88781198 +0000 UTC m=+902.271018137" lastFinishedPulling="2026-02-23 09:02:56.075703451 +0000 UTC m=+907.458909608" observedRunningTime="2026-02-23 09:02:57.076350246 +0000 UTC m=+908.459556423" watchObservedRunningTime="2026-02-23 09:03:00.204380268 +0000 UTC m=+911.587586425" Feb 23 09:03:00 crc kubenswrapper[4940]: I0223 09:03:00.755095 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:03:00 crc kubenswrapper[4940]: I0223 09:03:00.755140 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:03:00 crc kubenswrapper[4940]: I0223 09:03:00.761415 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:03:01 crc kubenswrapper[4940]: I0223 09:03:01.078739 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cd86f9fd7-66xg8" Feb 23 09:03:01 crc kubenswrapper[4940]: I0223 09:03:01.136749 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2zgdn"] Feb 23 09:03:10 crc kubenswrapper[4940]: I0223 09:03:10.733989 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-btmp6" Feb 23 
09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.621064 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95"] Feb 23 09:03:23 crc kubenswrapper[4940]: E0223 09:03:23.622517 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="registry-server" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.622534 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="registry-server" Feb 23 09:03:23 crc kubenswrapper[4940]: E0223 09:03:23.622551 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="extract-content" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.622557 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="extract-content" Feb 23 09:03:23 crc kubenswrapper[4940]: E0223 09:03:23.622574 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="extract-utilities" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.622582 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="extract-utilities" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.625324 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7282357-6def-4e94-8ea1-1a07d31044a9" containerName="registry-server" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.627020 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.630015 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.630107 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95"] Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.673112 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.673186 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.673275 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptv7\" (UniqueName: \"kubernetes.io/projected/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-kube-api-access-mptv7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: 
I0223 09:03:23.773874 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.774203 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.774238 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mptv7\" (UniqueName: \"kubernetes.io/projected/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-kube-api-access-mptv7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.774357 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.774675 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.792714 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mptv7\" (UniqueName: \"kubernetes.io/projected/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-kube-api-access-mptv7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:23 crc kubenswrapper[4940]: I0223 09:03:23.991867 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:24 crc kubenswrapper[4940]: I0223 09:03:24.405087 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95"] Feb 23 09:03:24 crc kubenswrapper[4940]: W0223 09:03:24.422888 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36e9d5fb_4709_4cb8_ac88_67a510ca10fe.slice/crio-ae8492c61ad332cea1e6611af66ed9ef8940ae9fe73d33f7a800abdd8416ce47 WatchSource:0}: Error finding container ae8492c61ad332cea1e6611af66ed9ef8940ae9fe73d33f7a800abdd8416ce47: Status 404 returned error can't find the container with id ae8492c61ad332cea1e6611af66ed9ef8940ae9fe73d33f7a800abdd8416ce47 Feb 23 09:03:25 crc kubenswrapper[4940]: I0223 09:03:25.237648 4940 generic.go:334] "Generic (PLEG): container finished" podID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerID="a6dbb86172392922e780b1da730d51523f9329f005893f7244360a3eb48bc225" 
exitCode=0 Feb 23 09:03:25 crc kubenswrapper[4940]: I0223 09:03:25.237712 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" event={"ID":"36e9d5fb-4709-4cb8-ac88-67a510ca10fe","Type":"ContainerDied","Data":"a6dbb86172392922e780b1da730d51523f9329f005893f7244360a3eb48bc225"} Feb 23 09:03:25 crc kubenswrapper[4940]: I0223 09:03:25.237750 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" event={"ID":"36e9d5fb-4709-4cb8-ac88-67a510ca10fe","Type":"ContainerStarted","Data":"ae8492c61ad332cea1e6611af66ed9ef8940ae9fe73d33f7a800abdd8416ce47"} Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.192760 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2zgdn" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerName="console" containerID="cri-o://3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570" gracePeriod=15 Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.601365 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2zgdn_07ef0edd-666b-4ced-9a27-51433a59c6c0/console/0.log" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.601781 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715743 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5kv7\" (UniqueName: \"kubernetes.io/projected/07ef0edd-666b-4ced-9a27-51433a59c6c0-kube-api-access-b5kv7\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715799 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-trusted-ca-bundle\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715868 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-oauth-serving-cert\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715899 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-serving-cert\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715915 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-oauth-config\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715930 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-config\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.715994 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-service-ca\") pod \"07ef0edd-666b-4ced-9a27-51433a59c6c0\" (UID: \"07ef0edd-666b-4ced-9a27-51433a59c6c0\") " Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.716806 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.716893 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-service-ca" (OuterVolumeSpecName: "service-ca") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.717098 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-config" (OuterVolumeSpecName: "console-config") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.717131 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.723271 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ef0edd-666b-4ced-9a27-51433a59c6c0-kube-api-access-b5kv7" (OuterVolumeSpecName: "kube-api-access-b5kv7") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "kube-api-access-b5kv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.723476 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.723716 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "07ef0edd-666b-4ced-9a27-51433a59c6c0" (UID: "07ef0edd-666b-4ced-9a27-51433a59c6c0"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818408 4940 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818472 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5kv7\" (UniqueName: \"kubernetes.io/projected/07ef0edd-666b-4ced-9a27-51433a59c6c0-kube-api-access-b5kv7\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818499 4940 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818522 4940 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818546 4940 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818569 4940 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:26 crc kubenswrapper[4940]: I0223 09:03:26.818592 4940 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/07ef0edd-666b-4ced-9a27-51433a59c6c0-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:27 crc 
kubenswrapper[4940]: I0223 09:03:27.254006 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2zgdn_07ef0edd-666b-4ced-9a27-51433a59c6c0/console/0.log" Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.254092 4940 generic.go:334] "Generic (PLEG): container finished" podID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerID="3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570" exitCode=2 Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.254189 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2zgdn" event={"ID":"07ef0edd-666b-4ced-9a27-51433a59c6c0","Type":"ContainerDied","Data":"3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570"} Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.254190 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2zgdn" Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.254235 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2zgdn" event={"ID":"07ef0edd-666b-4ced-9a27-51433a59c6c0","Type":"ContainerDied","Data":"dfff17a1648e69202a24771d9c9d5be6a439fc1f7a9ecd98c487199f22c3fef3"} Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.254273 4940 scope.go:117] "RemoveContainer" containerID="3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570" Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.259120 4940 generic.go:334] "Generic (PLEG): container finished" podID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerID="3b00b98259d42becb26764bc87f4a2ad0da289c1166bda97eca30c70caa0438e" exitCode=0 Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.259184 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" 
event={"ID":"36e9d5fb-4709-4cb8-ac88-67a510ca10fe","Type":"ContainerDied","Data":"3b00b98259d42becb26764bc87f4a2ad0da289c1166bda97eca30c70caa0438e"} Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.303019 4940 scope.go:117] "RemoveContainer" containerID="3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570" Feb 23 09:03:27 crc kubenswrapper[4940]: E0223 09:03:27.304060 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570\": container with ID starting with 3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570 not found: ID does not exist" containerID="3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570" Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.304124 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570"} err="failed to get container status \"3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570\": rpc error: code = NotFound desc = could not find container \"3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570\": container with ID starting with 3026d82c3f351e0b3048d623ce2822cb03f6221b44aaf47c1869a811cebe8570 not found: ID does not exist" Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.314681 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2zgdn"] Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.326179 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2zgdn"] Feb 23 09:03:27 crc kubenswrapper[4940]: I0223 09:03:27.359953 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" path="/var/lib/kubelet/pods/07ef0edd-666b-4ced-9a27-51433a59c6c0/volumes" Feb 23 09:03:28 crc 
kubenswrapper[4940]: I0223 09:03:28.277705 4940 generic.go:334] "Generic (PLEG): container finished" podID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerID="2b2cec3108d92016961cb778af6284ff250366c77e0080968e1679c965305401" exitCode=0 Feb 23 09:03:28 crc kubenswrapper[4940]: I0223 09:03:28.277809 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" event={"ID":"36e9d5fb-4709-4cb8-ac88-67a510ca10fe","Type":"ContainerDied","Data":"2b2cec3108d92016961cb778af6284ff250366c77e0080968e1679c965305401"} Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.612475 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.657833 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-util\") pod \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.657957 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mptv7\" (UniqueName: \"kubernetes.io/projected/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-kube-api-access-mptv7\") pod \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.658037 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-bundle\") pod \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\" (UID: \"36e9d5fb-4709-4cb8-ac88-67a510ca10fe\") " Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.659776 4940 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-bundle" (OuterVolumeSpecName: "bundle") pod "36e9d5fb-4709-4cb8-ac88-67a510ca10fe" (UID: "36e9d5fb-4709-4cb8-ac88-67a510ca10fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.664038 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-kube-api-access-mptv7" (OuterVolumeSpecName: "kube-api-access-mptv7") pod "36e9d5fb-4709-4cb8-ac88-67a510ca10fe" (UID: "36e9d5fb-4709-4cb8-ac88-67a510ca10fe"). InnerVolumeSpecName "kube-api-access-mptv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.672990 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-util" (OuterVolumeSpecName: "util") pod "36e9d5fb-4709-4cb8-ac88-67a510ca10fe" (UID: "36e9d5fb-4709-4cb8-ac88-67a510ca10fe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.759561 4940 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-util\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.759607 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mptv7\" (UniqueName: \"kubernetes.io/projected/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-kube-api-access-mptv7\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:29 crc kubenswrapper[4940]: I0223 09:03:29.759640 4940 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36e9d5fb-4709-4cb8-ac88-67a510ca10fe-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:03:30 crc kubenswrapper[4940]: I0223 09:03:30.292803 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" event={"ID":"36e9d5fb-4709-4cb8-ac88-67a510ca10fe","Type":"ContainerDied","Data":"ae8492c61ad332cea1e6611af66ed9ef8940ae9fe73d33f7a800abdd8416ce47"} Feb 23 09:03:30 crc kubenswrapper[4940]: I0223 09:03:30.292878 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8492c61ad332cea1e6611af66ed9ef8940ae9fe73d33f7a800abdd8416ce47" Feb 23 09:03:30 crc kubenswrapper[4940]: I0223 09:03:30.292887 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95" Feb 23 09:03:31 crc kubenswrapper[4940]: I0223 09:03:31.429343 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:03:31 crc kubenswrapper[4940]: I0223 09:03:31.429682 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.005134 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l"] Feb 23 09:03:39 crc kubenswrapper[4940]: E0223 09:03:39.006146 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerName="extract" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.006160 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerName="extract" Feb 23 09:03:39 crc kubenswrapper[4940]: E0223 09:03:39.006183 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerName="util" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.006191 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerName="util" Feb 23 09:03:39 crc kubenswrapper[4940]: E0223 09:03:39.006199 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" 
containerName="pull" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.006206 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerName="pull" Feb 23 09:03:39 crc kubenswrapper[4940]: E0223 09:03:39.006219 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerName="console" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.006226 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerName="console" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.006485 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ef0edd-666b-4ced-9a27-51433a59c6c0" containerName="console" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.006500 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e9d5fb-4709-4cb8-ac88-67a510ca10fe" containerName="extract" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.007006 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.009119 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.009134 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.009923 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.010062 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.018903 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-r7cc9" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.027368 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l"] Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.072635 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19abcf46-c53b-4409-a6f9-e7e8b41e3182-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.072714 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19abcf46-c53b-4409-a6f9-e7e8b41e3182-webhook-cert\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: 
\"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.072745 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ccg\" (UniqueName: \"kubernetes.io/projected/19abcf46-c53b-4409-a6f9-e7e8b41e3182-kube-api-access-56ccg\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.173802 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19abcf46-c53b-4409-a6f9-e7e8b41e3182-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.173864 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19abcf46-c53b-4409-a6f9-e7e8b41e3182-webhook-cert\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.173882 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ccg\" (UniqueName: \"kubernetes.io/projected/19abcf46-c53b-4409-a6f9-e7e8b41e3182-kube-api-access-56ccg\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.180578 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19abcf46-c53b-4409-a6f9-e7e8b41e3182-apiservice-cert\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.180735 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19abcf46-c53b-4409-a6f9-e7e8b41e3182-webhook-cert\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.193413 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ccg\" (UniqueName: \"kubernetes.io/projected/19abcf46-c53b-4409-a6f9-e7e8b41e3182-kube-api-access-56ccg\") pod \"metallb-operator-controller-manager-6fbfbdcfc7-6tv8l\" (UID: \"19abcf46-c53b-4409-a6f9-e7e8b41e3182\") " pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.232512 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s"] Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.233492 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.235157 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sf7dk" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.235451 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.236510 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.251227 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s"] Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.321858 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.376027 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/462005ef-96eb-4734-9ffe-eec88929e4d2-webhook-cert\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.376175 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv855\" (UniqueName: \"kubernetes.io/projected/462005ef-96eb-4734-9ffe-eec88929e4d2-kube-api-access-xv855\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 
09:03:39.376206 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/462005ef-96eb-4734-9ffe-eec88929e4d2-apiservice-cert\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.478227 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/462005ef-96eb-4734-9ffe-eec88929e4d2-webhook-cert\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.478318 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv855\" (UniqueName: \"kubernetes.io/projected/462005ef-96eb-4734-9ffe-eec88929e4d2-kube-api-access-xv855\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.478346 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/462005ef-96eb-4734-9ffe-eec88929e4d2-apiservice-cert\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.483554 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/462005ef-96eb-4734-9ffe-eec88929e4d2-apiservice-cert\") pod 
\"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.497227 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/462005ef-96eb-4734-9ffe-eec88929e4d2-webhook-cert\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.500155 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv855\" (UniqueName: \"kubernetes.io/projected/462005ef-96eb-4734-9ffe-eec88929e4d2-kube-api-access-xv855\") pod \"metallb-operator-webhook-server-d595fc4b7-pnf6s\" (UID: \"462005ef-96eb-4734-9ffe-eec88929e4d2\") " pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.552191 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.861670 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s"] Feb 23 09:03:39 crc kubenswrapper[4940]: I0223 09:03:39.881682 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l"] Feb 23 09:03:39 crc kubenswrapper[4940]: W0223 09:03:39.881821 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19abcf46_c53b_4409_a6f9_e7e8b41e3182.slice/crio-9a4c34e49fa73709ab4eabb742cc6a243121c7e617b109b3256c08199e7c754c WatchSource:0}: Error finding container 9a4c34e49fa73709ab4eabb742cc6a243121c7e617b109b3256c08199e7c754c: Status 404 returned error can't find the container with id 9a4c34e49fa73709ab4eabb742cc6a243121c7e617b109b3256c08199e7c754c Feb 23 09:03:40 crc kubenswrapper[4940]: I0223 09:03:40.353700 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" event={"ID":"19abcf46-c53b-4409-a6f9-e7e8b41e3182","Type":"ContainerStarted","Data":"9a4c34e49fa73709ab4eabb742cc6a243121c7e617b109b3256c08199e7c754c"} Feb 23 09:03:40 crc kubenswrapper[4940]: I0223 09:03:40.354798 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" event={"ID":"462005ef-96eb-4734-9ffe-eec88929e4d2","Type":"ContainerStarted","Data":"d3db83c1d8cc2ea0a40e53740e7b5be6dc822c2e3625d32770f7244a873a69ff"} Feb 23 09:03:43 crc kubenswrapper[4940]: I0223 09:03:43.381707 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" 
event={"ID":"19abcf46-c53b-4409-a6f9-e7e8b41e3182","Type":"ContainerStarted","Data":"267f60d410f3d904b1f58dd716843de75c21acb079d02d552326394f1eef136d"} Feb 23 09:03:43 crc kubenswrapper[4940]: I0223 09:03:43.382074 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:03:49 crc kubenswrapper[4940]: I0223 09:03:49.378563 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" podStartSLOduration=8.615464245 podStartE2EDuration="11.378542668s" podCreationTimestamp="2026-02-23 09:03:38 +0000 UTC" firstStartedPulling="2026-02-23 09:03:39.884798547 +0000 UTC m=+951.268004704" lastFinishedPulling="2026-02-23 09:03:42.64787697 +0000 UTC m=+954.031083127" observedRunningTime="2026-02-23 09:03:43.405771992 +0000 UTC m=+954.788978169" watchObservedRunningTime="2026-02-23 09:03:49.378542668 +0000 UTC m=+960.761748825" Feb 23 09:03:54 crc kubenswrapper[4940]: I0223 09:03:54.627240 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" event={"ID":"462005ef-96eb-4734-9ffe-eec88929e4d2","Type":"ContainerStarted","Data":"25f5ebc7937493039c6c7132ac4c7ae6bc12bcbd5aa0a4b69dc0a0df4e49d0a1"} Feb 23 09:03:54 crc kubenswrapper[4940]: I0223 09:03:54.628795 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:03:54 crc kubenswrapper[4940]: I0223 09:03:54.653049 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" podStartSLOduration=1.026092128 podStartE2EDuration="15.653027411s" podCreationTimestamp="2026-02-23 09:03:39 +0000 UTC" firstStartedPulling="2026-02-23 09:03:39.86749419 +0000 UTC m=+951.250700347" lastFinishedPulling="2026-02-23 
09:03:54.494429473 +0000 UTC m=+965.877635630" observedRunningTime="2026-02-23 09:03:54.648432219 +0000 UTC m=+966.031638396" watchObservedRunningTime="2026-02-23 09:03:54.653027411 +0000 UTC m=+966.036233568" Feb 23 09:04:01 crc kubenswrapper[4940]: I0223 09:04:01.430032 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:04:01 crc kubenswrapper[4940]: I0223 09:04:01.430480 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:04:09 crc kubenswrapper[4940]: I0223 09:04:09.559151 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d595fc4b7-pnf6s" Feb 23 09:04:19 crc kubenswrapper[4940]: I0223 09:04:19.326339 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.211823 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-vj8xk"] Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.217523 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs"] Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.218372 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.218920 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.221033 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.221652 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.223451 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bx6v5" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.223600 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.229785 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs"] Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.309136 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vw24x"] Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.310019 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.317366 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-knvw9" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.317482 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.319524 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.321820 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.328347 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-5cz68"] Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.329222 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.331598 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367348 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367572 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics-certs\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367758 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dz59\" (UniqueName: \"kubernetes.io/projected/bbb0e30c-ec14-4878-922f-df5bdaa26e76-kube-api-access-7dz59\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367809 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-sockets\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367838 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965z7\" (UniqueName: 
\"kubernetes.io/projected/130d1750-19ea-4753-87f5-1e7f85169a40-kube-api-access-965z7\") pod \"frr-k8s-webhook-server-78b44bf5bb-crdxs\" (UID: \"130d1750-19ea-4753-87f5-1e7f85169a40\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367859 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-startup\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367910 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-conf\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367929 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-reloader\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.367961 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/130d1750-19ea-4753-87f5-1e7f85169a40-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-crdxs\" (UID: \"130d1750-19ea-4753-87f5-1e7f85169a40\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.373553 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-5cz68"] Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 
09:04:20.468676 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2309dc31-3802-4155-847b-56d77574cee0-metallb-excludel2\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468738 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468755 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics-certs\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468776 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1d5ae18-3a8e-4845-a163-827184c53429-metrics-certs\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468805 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dz59\" (UniqueName: \"kubernetes.io/projected/bbb0e30c-ec14-4878-922f-df5bdaa26e76-kube-api-access-7dz59\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468820 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tvlll\" (UniqueName: \"kubernetes.io/projected/2309dc31-3802-4155-847b-56d77574cee0-kube-api-access-tvlll\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468840 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-sockets\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468854 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-965z7\" (UniqueName: \"kubernetes.io/projected/130d1750-19ea-4753-87f5-1e7f85169a40-kube-api-access-965z7\") pod \"frr-k8s-webhook-server-78b44bf5bb-crdxs\" (UID: \"130d1750-19ea-4753-87f5-1e7f85169a40\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468868 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-startup\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468900 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-conf\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468913 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-reloader\") pod \"frr-k8s-vj8xk\" 
(UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468932 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-metrics-certs\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468949 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/130d1750-19ea-4753-87f5-1e7f85169a40-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-crdxs\" (UID: \"130d1750-19ea-4753-87f5-1e7f85169a40\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468973 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e1d5ae18-3a8e-4845-a163-827184c53429-cert\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.468992 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.469005 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjwv\" (UniqueName: \"kubernetes.io/projected/e1d5ae18-3a8e-4845-a163-827184c53429-kube-api-access-9xjwv\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " 
pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.469393 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: E0223 09:04:20.469474 4940 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 23 09:04:20 crc kubenswrapper[4940]: E0223 09:04:20.469514 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics-certs podName:bbb0e30c-ec14-4878-922f-df5bdaa26e76 nodeName:}" failed. No retries permitted until 2026-02-23 09:04:20.969499647 +0000 UTC m=+992.352705804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics-certs") pod "frr-k8s-vj8xk" (UID: "bbb0e30c-ec14-4878-922f-df5bdaa26e76") : secret "frr-k8s-certs-secret" not found Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.470128 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-sockets\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.471043 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-startup\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.471274 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-frr-conf\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.471493 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bbb0e30c-ec14-4878-922f-df5bdaa26e76-reloader\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.478417 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/130d1750-19ea-4753-87f5-1e7f85169a40-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-crdxs\" (UID: \"130d1750-19ea-4753-87f5-1e7f85169a40\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.487086 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dz59\" (UniqueName: \"kubernetes.io/projected/bbb0e30c-ec14-4878-922f-df5bdaa26e76-kube-api-access-7dz59\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.495631 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-965z7\" (UniqueName: \"kubernetes.io/projected/130d1750-19ea-4753-87f5-1e7f85169a40-kube-api-access-965z7\") pod \"frr-k8s-webhook-server-78b44bf5bb-crdxs\" (UID: \"130d1750-19ea-4753-87f5-1e7f85169a40\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.549095 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.569997 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlll\" (UniqueName: \"kubernetes.io/projected/2309dc31-3802-4155-847b-56d77574cee0-kube-api-access-tvlll\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.570294 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-metrics-certs\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.570403 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e1d5ae18-3a8e-4845-a163-827184c53429-cert\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.570498 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.570579 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjwv\" (UniqueName: \"kubernetes.io/projected/e1d5ae18-3a8e-4845-a163-827184c53429-kube-api-access-9xjwv\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 
09:04:20.570694 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2309dc31-3802-4155-847b-56d77574cee0-metallb-excludel2\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.570806 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1d5ae18-3a8e-4845-a163-827184c53429-metrics-certs\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: E0223 09:04:20.570601 4940 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 09:04:20 crc kubenswrapper[4940]: E0223 09:04:20.571015 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist podName:2309dc31-3802-4155-847b-56d77574cee0 nodeName:}" failed. No retries permitted until 2026-02-23 09:04:21.070999145 +0000 UTC m=+992.454205302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist") pod "speaker-vw24x" (UID: "2309dc31-3802-4155-847b-56d77574cee0") : secret "metallb-memberlist" not found Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.571354 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2309dc31-3802-4155-847b-56d77574cee0-metallb-excludel2\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.572932 4940 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.575139 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e1d5ae18-3a8e-4845-a163-827184c53429-metrics-certs\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.582943 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-metrics-certs\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.590129 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e1d5ae18-3a8e-4845-a163-827184c53429-cert\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.590156 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-tvlll\" (UniqueName: \"kubernetes.io/projected/2309dc31-3802-4155-847b-56d77574cee0-kube-api-access-tvlll\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.593244 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjwv\" (UniqueName: \"kubernetes.io/projected/e1d5ae18-3a8e-4845-a163-827184c53429-kube-api-access-9xjwv\") pod \"controller-69bbfbf88f-5cz68\" (UID: \"e1d5ae18-3a8e-4845-a163-827184c53429\") " pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.647308 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.985523 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics-certs\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:20 crc kubenswrapper[4940]: I0223 09:04:20.989213 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bbb0e30c-ec14-4878-922f-df5bdaa26e76-metrics-certs\") pod \"frr-k8s-vj8xk\" (UID: \"bbb0e30c-ec14-4878-922f-df5bdaa26e76\") " pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.091423 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:21 crc kubenswrapper[4940]: E0223 09:04:21.091764 4940 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 09:04:21 crc kubenswrapper[4940]: E0223 09:04:21.091898 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist podName:2309dc31-3802-4155-847b-56d77574cee0 nodeName:}" failed. No retries permitted until 2026-02-23 09:04:22.091873983 +0000 UTC m=+993.475080140 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist") pod "speaker-vw24x" (UID: "2309dc31-3802-4155-847b-56d77574cee0") : secret "metallb-memberlist" not found Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.140320 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.163595 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-5cz68"] Feb 23 09:04:21 crc kubenswrapper[4940]: W0223 09:04:21.171163 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1d5ae18_3a8e_4845_a163_827184c53429.slice/crio-94a69acb4edb1af91576a42d27d2a946560fbd8be2a56ef564f58cd1d2f31dd6 WatchSource:0}: Error finding container 94a69acb4edb1af91576a42d27d2a946560fbd8be2a56ef564f58cd1d2f31dd6: Status 404 returned error can't find the container with id 94a69acb4edb1af91576a42d27d2a946560fbd8be2a56ef564f58cd1d2f31dd6 Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.258158 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.275015 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs"] Feb 23 09:04:21 crc kubenswrapper[4940]: W0223 09:04:21.279779 4940 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod130d1750_19ea_4753_87f5_1e7f85169a40.slice/crio-52934ed25509949e4699a8947f570a6e44a4123c061210528e3ae0f80640e866 WatchSource:0}: Error finding container 52934ed25509949e4699a8947f570a6e44a4123c061210528e3ae0f80640e866: Status 404 returned error can't find the container with id 52934ed25509949e4699a8947f570a6e44a4123c061210528e3ae0f80640e866 Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.980109 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" event={"ID":"130d1750-19ea-4753-87f5-1e7f85169a40","Type":"ContainerStarted","Data":"52934ed25509949e4699a8947f570a6e44a4123c061210528e3ae0f80640e866"} Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.981958 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"354a8a01ce2c951d4739f40334a179550a7c35059fd9952d36adaedffef895c1"} Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.984372 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5cz68" event={"ID":"e1d5ae18-3a8e-4845-a163-827184c53429","Type":"ContainerStarted","Data":"c9bd919964d8cc5c30662cb4d24bc3166f6b46192d7528cf65b0a68bdc8a325b"} Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.984424 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5cz68" event={"ID":"e1d5ae18-3a8e-4845-a163-827184c53429","Type":"ContainerStarted","Data":"03a2aeee1e48199530ac254ffddc1045e477445189d63c86b2fd2680b1a675ce"} Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.984441 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-5cz68" 
event={"ID":"e1d5ae18-3a8e-4845-a163-827184c53429","Type":"ContainerStarted","Data":"94a69acb4edb1af91576a42d27d2a946560fbd8be2a56ef564f58cd1d2f31dd6"} Feb 23 09:04:21 crc kubenswrapper[4940]: I0223 09:04:21.985645 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.033788 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-5cz68" podStartSLOduration=2.033767446 podStartE2EDuration="2.033767446s" podCreationTimestamp="2026-02-23 09:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:04:22.024604761 +0000 UTC m=+993.407810918" watchObservedRunningTime="2026-02-23 09:04:22.033767446 +0000 UTC m=+993.416973603" Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.103236 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.108000 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2309dc31-3802-4155-847b-56d77574cee0-memberlist\") pod \"speaker-vw24x\" (UID: \"2309dc31-3802-4155-847b-56d77574cee0\") " pod="metallb-system/speaker-vw24x" Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.124044 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-vw24x" Feb 23 09:04:22 crc kubenswrapper[4940]: W0223 09:04:22.145915 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2309dc31_3802_4155_847b_56d77574cee0.slice/crio-efac59a966a9d009d76428061e0ad631f0b1cc2cb95ee3cbf7b723ed74b9d2d0 WatchSource:0}: Error finding container efac59a966a9d009d76428061e0ad631f0b1cc2cb95ee3cbf7b723ed74b9d2d0: Status 404 returned error can't find the container with id efac59a966a9d009d76428061e0ad631f0b1cc2cb95ee3cbf7b723ed74b9d2d0 Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.998078 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vw24x" event={"ID":"2309dc31-3802-4155-847b-56d77574cee0","Type":"ContainerStarted","Data":"ef33b30faffec443ab7bbb0b5fde52a7265400b9b05f5fae48c7797d78230081"} Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.998465 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vw24x" event={"ID":"2309dc31-3802-4155-847b-56d77574cee0","Type":"ContainerStarted","Data":"5ee8c951de04df191ca0c40557c9f38a64a6cbfd4d4ce85d4fb66ef77f24dd39"} Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.998480 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vw24x" event={"ID":"2309dc31-3802-4155-847b-56d77574cee0","Type":"ContainerStarted","Data":"efac59a966a9d009d76428061e0ad631f0b1cc2cb95ee3cbf7b723ed74b9d2d0"} Feb 23 09:04:22 crc kubenswrapper[4940]: I0223 09:04:22.998728 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vw24x" Feb 23 09:04:29 crc kubenswrapper[4940]: I0223 09:04:29.552220 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vw24x" podStartSLOduration=9.552195537 podStartE2EDuration="9.552195537s" podCreationTimestamp="2026-02-23 09:04:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:04:23.020332784 +0000 UTC m=+994.403538941" watchObservedRunningTime="2026-02-23 09:04:29.552195537 +0000 UTC m=+1000.935401694" Feb 23 09:04:31 crc kubenswrapper[4940]: I0223 09:04:31.429316 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:04:31 crc kubenswrapper[4940]: I0223 09:04:31.429809 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:04:31 crc kubenswrapper[4940]: I0223 09:04:31.429853 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:04:31 crc kubenswrapper[4940]: I0223 09:04:31.430393 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb93228543200fd2d6020d08fa2989a091f75ae9c39f73df80e0c09ed858a572"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:04:31 crc kubenswrapper[4940]: I0223 09:04:31.430437 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" 
containerID="cri-o://cb93228543200fd2d6020d08fa2989a091f75ae9c39f73df80e0c09ed858a572" gracePeriod=600 Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.128643 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vw24x" Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.155392 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="cb93228543200fd2d6020d08fa2989a091f75ae9c39f73df80e0c09ed858a572" exitCode=0 Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.155458 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"cb93228543200fd2d6020d08fa2989a091f75ae9c39f73df80e0c09ed858a572"} Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.155521 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"c29b2fbbefbec40ee98e54f4b935b91d77c404cc4f7cfd7802c91d0a773948d7"} Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.155544 4940 scope.go:117] "RemoveContainer" containerID="2bd3eb7943bad5600333867d165292448d5139e55ddeeb5836e6b84ea71a9212" Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.157391 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" event={"ID":"130d1750-19ea-4753-87f5-1e7f85169a40","Type":"ContainerStarted","Data":"9a42dd7c6f0b4c1f0b89efbe99cf05ca21ab4fa4d333ec7f96152b62e4c119dc"} Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.157514 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.158909 4940 generic.go:334] "Generic 
(PLEG): container finished" podID="bbb0e30c-ec14-4878-922f-df5bdaa26e76" containerID="ead218eaa8a961e47045afb1e4ddb3196ef472d9e8cb4dd17879932e72ef248e" exitCode=0 Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.158952 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerDied","Data":"ead218eaa8a961e47045afb1e4ddb3196ef472d9e8cb4dd17879932e72ef248e"} Feb 23 09:04:32 crc kubenswrapper[4940]: I0223 09:04:32.205958 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" podStartSLOduration=2.312434296 podStartE2EDuration="12.205943315s" podCreationTimestamp="2026-02-23 09:04:20 +0000 UTC" firstStartedPulling="2026-02-23 09:04:21.281799182 +0000 UTC m=+992.665005339" lastFinishedPulling="2026-02-23 09:04:31.175308191 +0000 UTC m=+1002.558514358" observedRunningTime="2026-02-23 09:04:32.202694554 +0000 UTC m=+1003.585900731" watchObservedRunningTime="2026-02-23 09:04:32.205943315 +0000 UTC m=+1003.589149472" Feb 23 09:04:33 crc kubenswrapper[4940]: I0223 09:04:33.166832 4940 generic.go:334] "Generic (PLEG): container finished" podID="bbb0e30c-ec14-4878-922f-df5bdaa26e76" containerID="8331924a2e5d97146626ad540fe77c2edad56c7f6f3c38f4b8baddbd57f6658d" exitCode=0 Feb 23 09:04:33 crc kubenswrapper[4940]: I0223 09:04:33.166952 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerDied","Data":"8331924a2e5d97146626ad540fe77c2edad56c7f6f3c38f4b8baddbd57f6658d"} Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.177048 4940 generic.go:334] "Generic (PLEG): container finished" podID="bbb0e30c-ec14-4878-922f-df5bdaa26e76" containerID="17dfe577b4334df63773d13ce54d40aebffbc939ab5871b87a6416ae43cfa78f" exitCode=0 Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.177169 4940 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerDied","Data":"17dfe577b4334df63773d13ce54d40aebffbc939ab5871b87a6416ae43cfa78f"} Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.210943 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qvb7p"] Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.212434 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.219869 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvb7p"] Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.296328 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-utilities\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.296655 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tllmh\" (UniqueName: \"kubernetes.io/projected/493c6869-412e-479a-af55-1e25ae7f028e-kube-api-access-tllmh\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.296728 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-catalog-content\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " 
pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.398587 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tllmh\" (UniqueName: \"kubernetes.io/projected/493c6869-412e-479a-af55-1e25ae7f028e-kube-api-access-tllmh\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.398669 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-catalog-content\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.398708 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-utilities\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.399159 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-catalog-content\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.399222 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-utilities\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " 
pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.420424 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tllmh\" (UniqueName: \"kubernetes.io/projected/493c6869-412e-479a-af55-1e25ae7f028e-kube-api-access-tllmh\") pod \"certified-operators-qvb7p\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:34 crc kubenswrapper[4940]: I0223 09:04:34.536234 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:35 crc kubenswrapper[4940]: I0223 09:04:35.079685 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvb7p"] Feb 23 09:04:35 crc kubenswrapper[4940]: W0223 09:04:35.091465 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod493c6869_412e_479a_af55_1e25ae7f028e.slice/crio-fe1378f3e34b5f011e3fcb4c288ea018b295dab969ffb5a47118a4abc24d8002 WatchSource:0}: Error finding container fe1378f3e34b5f011e3fcb4c288ea018b295dab969ffb5a47118a4abc24d8002: Status 404 returned error can't find the container with id fe1378f3e34b5f011e3fcb4c288ea018b295dab969ffb5a47118a4abc24d8002 Feb 23 09:04:35 crc kubenswrapper[4940]: I0223 09:04:35.285535 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"4a12b349a00ae8d214bf6d505150565b7afb7ebb5c5c5e9e60e11c43df4cc641"} Feb 23 09:04:35 crc kubenswrapper[4940]: I0223 09:04:35.285576 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"63f24a8c5bff1b77b66d3307d14c8d211849ba968bd41868adc9ae9a9d382b8a"} Feb 23 09:04:35 crc 
kubenswrapper[4940]: I0223 09:04:35.285587 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"39b1eb0187c6e0259789a919c4542c0ffa65dcae4c4cc2274b9dba851aafa233"} Feb 23 09:04:35 crc kubenswrapper[4940]: I0223 09:04:35.285595 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"ca29ac9e85f6e6945e0573544ecaf5efb4c91fef0b73dd7bb477e0806c632333"} Feb 23 09:04:35 crc kubenswrapper[4940]: I0223 09:04:35.285603 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"08fab8de92923a0a9c26317fef1568e264fb537995fba880566e03cf74dd7d93"} Feb 23 09:04:35 crc kubenswrapper[4940]: I0223 09:04:35.287989 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerStarted","Data":"fe1378f3e34b5f011e3fcb4c288ea018b295dab969ffb5a47118a4abc24d8002"} Feb 23 09:04:36 crc kubenswrapper[4940]: I0223 09:04:36.302694 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-vj8xk" event={"ID":"bbb0e30c-ec14-4878-922f-df5bdaa26e76","Type":"ContainerStarted","Data":"a2fac05ca0e79ede54ff36f4a129e9e90ce055bf17c5f84e10f26047a4150daf"} Feb 23 09:04:36 crc kubenswrapper[4940]: I0223 09:04:36.303142 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:36 crc kubenswrapper[4940]: I0223 09:04:36.304300 4940 generic.go:334] "Generic (PLEG): container finished" podID="493c6869-412e-479a-af55-1e25ae7f028e" containerID="9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a" exitCode=0 Feb 23 09:04:36 crc kubenswrapper[4940]: I0223 
09:04:36.304339 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerDied","Data":"9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a"} Feb 23 09:04:36 crc kubenswrapper[4940]: I0223 09:04:36.342147 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-vj8xk" podStartSLOduration=6.424153924 podStartE2EDuration="16.342126452s" podCreationTimestamp="2026-02-23 09:04:20 +0000 UTC" firstStartedPulling="2026-02-23 09:04:21.257818178 +0000 UTC m=+992.641024335" lastFinishedPulling="2026-02-23 09:04:31.175790686 +0000 UTC m=+1002.558996863" observedRunningTime="2026-02-23 09:04:36.338599263 +0000 UTC m=+1007.721805460" watchObservedRunningTime="2026-02-23 09:04:36.342126452 +0000 UTC m=+1007.725332619" Feb 23 09:04:37 crc kubenswrapper[4940]: I0223 09:04:37.314285 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerStarted","Data":"fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f"} Feb 23 09:04:37 crc kubenswrapper[4940]: I0223 09:04:37.988151 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fkkwt"] Feb 23 09:04:37 crc kubenswrapper[4940]: I0223 09:04:37.990657 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:37 crc kubenswrapper[4940]: I0223 09:04:37.993675 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wjcxm" Feb 23 09:04:37 crc kubenswrapper[4940]: I0223 09:04:37.997489 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 23 09:04:37 crc kubenswrapper[4940]: I0223 09:04:37.997714 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 23 09:04:38 crc kubenswrapper[4940]: I0223 09:04:38.062134 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw628\" (UniqueName: \"kubernetes.io/projected/808d7f68-dc41-4211-b785-00e0157483b1-kube-api-access-tw628\") pod \"openstack-operator-index-fkkwt\" (UID: \"808d7f68-dc41-4211-b785-00e0157483b1\") " pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:38 crc kubenswrapper[4940]: I0223 09:04:38.096971 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fkkwt"] Feb 23 09:04:38 crc kubenswrapper[4940]: I0223 09:04:38.163053 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw628\" (UniqueName: \"kubernetes.io/projected/808d7f68-dc41-4211-b785-00e0157483b1-kube-api-access-tw628\") pod \"openstack-operator-index-fkkwt\" (UID: \"808d7f68-dc41-4211-b785-00e0157483b1\") " pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:38 crc kubenswrapper[4940]: I0223 09:04:38.281523 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw628\" (UniqueName: \"kubernetes.io/projected/808d7f68-dc41-4211-b785-00e0157483b1-kube-api-access-tw628\") pod \"openstack-operator-index-fkkwt\" (UID: 
\"808d7f68-dc41-4211-b785-00e0157483b1\") " pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:38 crc kubenswrapper[4940]: I0223 09:04:38.320129 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:38 crc kubenswrapper[4940]: I0223 09:04:38.748489 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fkkwt"] Feb 23 09:04:38 crc kubenswrapper[4940]: W0223 09:04:38.753897 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod808d7f68_dc41_4211_b785_00e0157483b1.slice/crio-6c158531c6efdf7ba505d5d268b1a82616fa9b194c5f01aedcb7dc5971fed378 WatchSource:0}: Error finding container 6c158531c6efdf7ba505d5d268b1a82616fa9b194c5f01aedcb7dc5971fed378: Status 404 returned error can't find the container with id 6c158531c6efdf7ba505d5d268b1a82616fa9b194c5f01aedcb7dc5971fed378 Feb 23 09:04:39 crc kubenswrapper[4940]: I0223 09:04:39.341638 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fkkwt" event={"ID":"808d7f68-dc41-4211-b785-00e0157483b1","Type":"ContainerStarted","Data":"6c158531c6efdf7ba505d5d268b1a82616fa9b194c5f01aedcb7dc5971fed378"} Feb 23 09:04:39 crc kubenswrapper[4940]: I0223 09:04:39.343909 4940 generic.go:334] "Generic (PLEG): container finished" podID="493c6869-412e-479a-af55-1e25ae7f028e" containerID="fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f" exitCode=0 Feb 23 09:04:39 crc kubenswrapper[4940]: I0223 09:04:39.343955 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerDied","Data":"fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f"} Feb 23 09:04:40 crc kubenswrapper[4940]: I0223 09:04:40.656536 4940 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-5cz68" Feb 23 09:04:41 crc kubenswrapper[4940]: I0223 09:04:41.142804 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:41 crc kubenswrapper[4940]: I0223 09:04:41.199327 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:42 crc kubenswrapper[4940]: I0223 09:04:42.365124 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerStarted","Data":"dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6"} Feb 23 09:04:42 crc kubenswrapper[4940]: I0223 09:04:42.367387 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fkkwt" event={"ID":"808d7f68-dc41-4211-b785-00e0157483b1","Type":"ContainerStarted","Data":"1e0949dac08b5983cbc34da50fd736a2130bcb5c39c8ede2bd87f5d5e14c639b"} Feb 23 09:04:42 crc kubenswrapper[4940]: I0223 09:04:42.386742 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qvb7p" podStartSLOduration=2.655227919 podStartE2EDuration="8.386724829s" podCreationTimestamp="2026-02-23 09:04:34 +0000 UTC" firstStartedPulling="2026-02-23 09:04:36.306289421 +0000 UTC m=+1007.689495578" lastFinishedPulling="2026-02-23 09:04:42.037786331 +0000 UTC m=+1013.420992488" observedRunningTime="2026-02-23 09:04:42.381287971 +0000 UTC m=+1013.764494148" watchObservedRunningTime="2026-02-23 09:04:42.386724829 +0000 UTC m=+1013.769930986" Feb 23 09:04:42 crc kubenswrapper[4940]: I0223 09:04:42.401413 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fkkwt" podStartSLOduration=2.083672021 
podStartE2EDuration="5.401397784s" podCreationTimestamp="2026-02-23 09:04:37 +0000 UTC" firstStartedPulling="2026-02-23 09:04:38.756020573 +0000 UTC m=+1010.139226730" lastFinishedPulling="2026-02-23 09:04:42.073746336 +0000 UTC m=+1013.456952493" observedRunningTime="2026-02-23 09:04:42.398656909 +0000 UTC m=+1013.781863076" watchObservedRunningTime="2026-02-23 09:04:42.401397784 +0000 UTC m=+1013.784603941" Feb 23 09:04:44 crc kubenswrapper[4940]: I0223 09:04:44.536845 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:44 crc kubenswrapper[4940]: I0223 09:04:44.537198 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:44 crc kubenswrapper[4940]: I0223 09:04:44.586518 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:48 crc kubenswrapper[4940]: I0223 09:04:48.321297 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:48 crc kubenswrapper[4940]: I0223 09:04:48.321581 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:48 crc kubenswrapper[4940]: I0223 09:04:48.360007 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:48 crc kubenswrapper[4940]: I0223 09:04:48.447913 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fkkwt" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.417189 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q"] Feb 23 09:04:50 crc 
kubenswrapper[4940]: I0223 09:04:50.418595 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.420567 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9z2p9" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.427040 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q"] Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.461417 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzlkh\" (UniqueName: \"kubernetes.io/projected/c924436a-929d-4ed7-ad05-6f9dea4ab38a-kube-api-access-vzlkh\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.461696 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-util\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.461800 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-bundle\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " 
pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.562813 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-bundle\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.562916 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzlkh\" (UniqueName: \"kubernetes.io/projected/c924436a-929d-4ed7-ad05-6f9dea4ab38a-kube-api-access-vzlkh\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.563049 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-util\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.563920 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-bundle\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.563998 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-util\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.752500 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-crdxs" Feb 23 09:04:50 crc kubenswrapper[4940]: I0223 09:04:50.765312 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzlkh\" (UniqueName: \"kubernetes.io/projected/c924436a-929d-4ed7-ad05-6f9dea4ab38a-kube-api-access-vzlkh\") pod \"f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:51 crc kubenswrapper[4940]: I0223 09:04:51.043923 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:51 crc kubenswrapper[4940]: I0223 09:04:51.144354 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-vj8xk" Feb 23 09:04:51 crc kubenswrapper[4940]: I0223 09:04:51.507388 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q"] Feb 23 09:04:52 crc kubenswrapper[4940]: I0223 09:04:52.431284 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" event={"ID":"c924436a-929d-4ed7-ad05-6f9dea4ab38a","Type":"ContainerStarted","Data":"bce7c0d988c7c5f0dfa2657dfa3965c8cfe700705daa3a71aa0f57b8fc1b6380"} Feb 23 09:04:53 crc kubenswrapper[4940]: I0223 09:04:53.443034 4940 generic.go:334] "Generic (PLEG): container finished" podID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerID="3bbe6f490bf81e35e45b60cbfe2b80884062384b885da2717ed4dce7b8880302" exitCode=0 Feb 23 09:04:53 crc kubenswrapper[4940]: I0223 09:04:53.443432 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" event={"ID":"c924436a-929d-4ed7-ad05-6f9dea4ab38a","Type":"ContainerDied","Data":"3bbe6f490bf81e35e45b60cbfe2b80884062384b885da2717ed4dce7b8880302"} Feb 23 09:04:54 crc kubenswrapper[4940]: I0223 09:04:54.457678 4940 generic.go:334] "Generic (PLEG): container finished" podID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerID="da24fe43e4e78648f119b760971099de5014236be1f2560007a8a9399598dc17" exitCode=0 Feb 23 09:04:54 crc kubenswrapper[4940]: I0223 09:04:54.457717 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" 
event={"ID":"c924436a-929d-4ed7-ad05-6f9dea4ab38a","Type":"ContainerDied","Data":"da24fe43e4e78648f119b760971099de5014236be1f2560007a8a9399598dc17"} Feb 23 09:04:54 crc kubenswrapper[4940]: I0223 09:04:54.584057 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:55 crc kubenswrapper[4940]: I0223 09:04:55.465935 4940 generic.go:334] "Generic (PLEG): container finished" podID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerID="d07decf631ad155a122337deea333809972d5265da589754407b5be1d2950dc5" exitCode=0 Feb 23 09:04:55 crc kubenswrapper[4940]: I0223 09:04:55.466011 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" event={"ID":"c924436a-929d-4ed7-ad05-6f9dea4ab38a","Type":"ContainerDied","Data":"d07decf631ad155a122337deea333809972d5265da589754407b5be1d2950dc5"} Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.773147 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.952043 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-util\") pod \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.952120 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzlkh\" (UniqueName: \"kubernetes.io/projected/c924436a-929d-4ed7-ad05-6f9dea4ab38a-kube-api-access-vzlkh\") pod \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.952193 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-bundle\") pod \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\" (UID: \"c924436a-929d-4ed7-ad05-6f9dea4ab38a\") " Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.953054 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-bundle" (OuterVolumeSpecName: "bundle") pod "c924436a-929d-4ed7-ad05-6f9dea4ab38a" (UID: "c924436a-929d-4ed7-ad05-6f9dea4ab38a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.957810 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c924436a-929d-4ed7-ad05-6f9dea4ab38a-kube-api-access-vzlkh" (OuterVolumeSpecName: "kube-api-access-vzlkh") pod "c924436a-929d-4ed7-ad05-6f9dea4ab38a" (UID: "c924436a-929d-4ed7-ad05-6f9dea4ab38a"). InnerVolumeSpecName "kube-api-access-vzlkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:04:56 crc kubenswrapper[4940]: I0223 09:04:56.966898 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-util" (OuterVolumeSpecName: "util") pod "c924436a-929d-4ed7-ad05-6f9dea4ab38a" (UID: "c924436a-929d-4ed7-ad05-6f9dea4ab38a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.053568 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzlkh\" (UniqueName: \"kubernetes.io/projected/c924436a-929d-4ed7-ad05-6f9dea4ab38a-kube-api-access-vzlkh\") on node \"crc\" DevicePath \"\"" Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.053604 4940 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.053630 4940 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c924436a-929d-4ed7-ad05-6f9dea4ab38a-util\") on node \"crc\" DevicePath \"\"" Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.481244 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" event={"ID":"c924436a-929d-4ed7-ad05-6f9dea4ab38a","Type":"ContainerDied","Data":"bce7c0d988c7c5f0dfa2657dfa3965c8cfe700705daa3a71aa0f57b8fc1b6380"} Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.481556 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce7c0d988c7c5f0dfa2657dfa3965c8cfe700705daa3a71aa0f57b8fc1b6380" Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.481298 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q" Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.977646 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvb7p"] Feb 23 09:04:57 crc kubenswrapper[4940]: I0223 09:04:57.978034 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qvb7p" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="registry-server" containerID="cri-o://dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6" gracePeriod=2 Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.359381 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.381028 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-utilities\") pod \"493c6869-412e-479a-af55-1e25ae7f028e\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.381106 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tllmh\" (UniqueName: \"kubernetes.io/projected/493c6869-412e-479a-af55-1e25ae7f028e-kube-api-access-tllmh\") pod \"493c6869-412e-479a-af55-1e25ae7f028e\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.381129 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-catalog-content\") pod \"493c6869-412e-479a-af55-1e25ae7f028e\" (UID: \"493c6869-412e-479a-af55-1e25ae7f028e\") " Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.382166 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-utilities" (OuterVolumeSpecName: "utilities") pod "493c6869-412e-479a-af55-1e25ae7f028e" (UID: "493c6869-412e-479a-af55-1e25ae7f028e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.386705 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493c6869-412e-479a-af55-1e25ae7f028e-kube-api-access-tllmh" (OuterVolumeSpecName: "kube-api-access-tllmh") pod "493c6869-412e-479a-af55-1e25ae7f028e" (UID: "493c6869-412e-479a-af55-1e25ae7f028e"). InnerVolumeSpecName "kube-api-access-tllmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.432930 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "493c6869-412e-479a-af55-1e25ae7f028e" (UID: "493c6869-412e-479a-af55-1e25ae7f028e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.482210 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.482245 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tllmh\" (UniqueName: \"kubernetes.io/projected/493c6869-412e-479a-af55-1e25ae7f028e-kube-api-access-tllmh\") on node \"crc\" DevicePath \"\"" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.482258 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493c6869-412e-479a-af55-1e25ae7f028e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.489029 4940 generic.go:334] "Generic (PLEG): container finished" podID="493c6869-412e-479a-af55-1e25ae7f028e" containerID="dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6" exitCode=0 Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.489073 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerDied","Data":"dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6"} Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.489089 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvb7p" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.489101 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvb7p" event={"ID":"493c6869-412e-479a-af55-1e25ae7f028e","Type":"ContainerDied","Data":"fe1378f3e34b5f011e3fcb4c288ea018b295dab969ffb5a47118a4abc24d8002"} Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.489124 4940 scope.go:117] "RemoveContainer" containerID="dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.504902 4940 scope.go:117] "RemoveContainer" containerID="fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.523907 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvb7p"] Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.523959 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qvb7p"] Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.534371 4940 scope.go:117] "RemoveContainer" containerID="9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.550872 4940 scope.go:117] "RemoveContainer" containerID="dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6" Feb 23 09:04:58 crc kubenswrapper[4940]: E0223 09:04:58.551258 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6\": container with ID starting with dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6 not found: ID does not exist" containerID="dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.551297 4940 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6"} err="failed to get container status \"dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6\": rpc error: code = NotFound desc = could not find container \"dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6\": container with ID starting with dbe4b115cdbc97089da950eaa69b0daec89056d5970522263a3a49d1eb8672c6 not found: ID does not exist" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.551323 4940 scope.go:117] "RemoveContainer" containerID="fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f" Feb 23 09:04:58 crc kubenswrapper[4940]: E0223 09:04:58.551544 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f\": container with ID starting with fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f not found: ID does not exist" containerID="fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.551570 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f"} err="failed to get container status \"fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f\": rpc error: code = NotFound desc = could not find container \"fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f\": container with ID starting with fc98f9d18762b029a0d71e7d652e6188fe9573bd84fcf660e44f128903c9cf9f not found: ID does not exist" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.551588 4940 scope.go:117] "RemoveContainer" containerID="9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a" Feb 23 09:04:58 crc kubenswrapper[4940]: E0223 
09:04:58.551835 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a\": container with ID starting with 9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a not found: ID does not exist" containerID="9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a" Feb 23 09:04:58 crc kubenswrapper[4940]: I0223 09:04:58.551864 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a"} err="failed to get container status \"9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a\": rpc error: code = NotFound desc = could not find container \"9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a\": container with ID starting with 9c755c3721845965f33e98be89f210bcde31d27d3fb5c2394711e64c672eb20a not found: ID does not exist" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.365465 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="493c6869-412e-479a-af55-1e25ae7f028e" path="/var/lib/kubelet/pods/493c6869-412e-479a-af55-1e25ae7f028e/volumes" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585262 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mzg8t"] Feb 23 09:04:59 crc kubenswrapper[4940]: E0223 09:04:59.585502 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerName="extract" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585513 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerName="extract" Feb 23 09:04:59 crc kubenswrapper[4940]: E0223 09:04:59.585525 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" 
containerName="util" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585532 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerName="util" Feb 23 09:04:59 crc kubenswrapper[4940]: E0223 09:04:59.585540 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerName="pull" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585546 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerName="pull" Feb 23 09:04:59 crc kubenswrapper[4940]: E0223 09:04:59.585557 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="extract-utilities" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585562 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="extract-utilities" Feb 23 09:04:59 crc kubenswrapper[4940]: E0223 09:04:59.585574 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="registry-server" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585580 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="registry-server" Feb 23 09:04:59 crc kubenswrapper[4940]: E0223 09:04:59.585588 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="extract-content" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585594 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="extract-content" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.585761 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c924436a-929d-4ed7-ad05-6f9dea4ab38a" containerName="extract" Feb 23 09:04:59 crc 
kubenswrapper[4940]: I0223 09:04:59.585773 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="493c6869-412e-479a-af55-1e25ae7f028e" containerName="registry-server" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.586506 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.600182 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzg8t"] Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.696828 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmzp\" (UniqueName: \"kubernetes.io/projected/73de16ac-08c7-439d-b525-dc8db35c1115-kube-api-access-cvmzp\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.697176 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-utilities\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.697248 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-catalog-content\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.797983 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-catalog-content\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.798079 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmzp\" (UniqueName: \"kubernetes.io/projected/73de16ac-08c7-439d-b525-dc8db35c1115-kube-api-access-cvmzp\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.798098 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-utilities\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.798669 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-utilities\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.798681 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-catalog-content\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.816302 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmzp\" (UniqueName: 
\"kubernetes.io/projected/73de16ac-08c7-439d-b525-dc8db35c1115-kube-api-access-cvmzp\") pod \"community-operators-mzg8t\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:04:59 crc kubenswrapper[4940]: I0223 09:04:59.903770 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:00 crc kubenswrapper[4940]: I0223 09:05:00.434786 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzg8t"] Feb 23 09:05:00 crc kubenswrapper[4940]: I0223 09:05:00.504428 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerStarted","Data":"aafbda75de346d6292865ccc89b783b91c8eed3e1aa0318bbe051385ea387859"} Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.482587 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267"] Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.484178 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.487695 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mncph" Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.507707 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267"] Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.512749 4940 generic.go:334] "Generic (PLEG): container finished" podID="73de16ac-08c7-439d-b525-dc8db35c1115" containerID="4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee" exitCode=0 Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.512797 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerDied","Data":"4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee"} Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.523022 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqcr8\" (UniqueName: \"kubernetes.io/projected/bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00-kube-api-access-zqcr8\") pod \"openstack-operator-controller-init-68c97fd8b-ls267\" (UID: \"bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00\") " pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.623876 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqcr8\" (UniqueName: \"kubernetes.io/projected/bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00-kube-api-access-zqcr8\") pod \"openstack-operator-controller-init-68c97fd8b-ls267\" (UID: \"bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00\") " 
pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.645061 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqcr8\" (UniqueName: \"kubernetes.io/projected/bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00-kube-api-access-zqcr8\") pod \"openstack-operator-controller-init-68c97fd8b-ls267\" (UID: \"bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00\") " pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:01 crc kubenswrapper[4940]: I0223 09:05:01.799815 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:02 crc kubenswrapper[4940]: I0223 09:05:02.324086 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267"] Feb 23 09:05:02 crc kubenswrapper[4940]: I0223 09:05:02.524437 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" event={"ID":"bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00","Type":"ContainerStarted","Data":"bd65c14f0eaf7629dda638c5112301b1b18e9f1e79b6e5599af26ab2ff12e900"} Feb 23 09:05:02 crc kubenswrapper[4940]: I0223 09:05:02.526873 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerStarted","Data":"1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a"} Feb 23 09:05:03 crc kubenswrapper[4940]: I0223 09:05:03.619071 4940 generic.go:334] "Generic (PLEG): container finished" podID="73de16ac-08c7-439d-b525-dc8db35c1115" containerID="1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a" exitCode=0 Feb 23 09:05:03 crc kubenswrapper[4940]: I0223 09:05:03.619129 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerDied","Data":"1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a"} Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.665182 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" event={"ID":"bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00","Type":"ContainerStarted","Data":"70a1e4b05c8a54368f14f093ca98ccaa4b666d5a9b0d9a4cec13801b9a43cfb7"} Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.665643 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.667298 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerStarted","Data":"18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c"} Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.694872 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" podStartSLOduration=2.225393985 podStartE2EDuration="8.694856156s" podCreationTimestamp="2026-02-23 09:05:01 +0000 UTC" firstStartedPulling="2026-02-23 09:05:02.286070209 +0000 UTC m=+1033.669276366" lastFinishedPulling="2026-02-23 09:05:08.75553238 +0000 UTC m=+1040.138738537" observedRunningTime="2026-02-23 09:05:09.693469472 +0000 UTC m=+1041.076675619" watchObservedRunningTime="2026-02-23 09:05:09.694856156 +0000 UTC m=+1041.078062313" Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.723821 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mzg8t" podStartSLOduration=3.499648446 podStartE2EDuration="10.723797615s" 
podCreationTimestamp="2026-02-23 09:04:59 +0000 UTC" firstStartedPulling="2026-02-23 09:05:01.515793827 +0000 UTC m=+1032.898999974" lastFinishedPulling="2026-02-23 09:05:08.739942986 +0000 UTC m=+1040.123149143" observedRunningTime="2026-02-23 09:05:09.715794306 +0000 UTC m=+1041.099000483" watchObservedRunningTime="2026-02-23 09:05:09.723797615 +0000 UTC m=+1041.107003812" Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.904702 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:09 crc kubenswrapper[4940]: I0223 09:05:09.904749 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:10 crc kubenswrapper[4940]: I0223 09:05:10.951647 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mzg8t" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="registry-server" probeResult="failure" output=< Feb 23 09:05:10 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:05:10 crc kubenswrapper[4940]: > Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.801044 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-248kt"] Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.803661 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.823091 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-248kt"] Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.847745 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kps9\" (UniqueName: \"kubernetes.io/projected/ae338573-cca0-4cea-bf2b-1951fd37edbe-kube-api-access-6kps9\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.847796 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-catalog-content\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.847824 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-utilities\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.948747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-catalog-content\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.948851 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-utilities\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.949041 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kps9\" (UniqueName: \"kubernetes.io/projected/ae338573-cca0-4cea-bf2b-1951fd37edbe-kube-api-access-6kps9\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.949262 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-catalog-content\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.949342 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-utilities\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:11 crc kubenswrapper[4940]: I0223 09:05:11.969488 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kps9\" (UniqueName: \"kubernetes.io/projected/ae338573-cca0-4cea-bf2b-1951fd37edbe-kube-api-access-6kps9\") pod \"redhat-marketplace-248kt\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:12 crc kubenswrapper[4940]: I0223 09:05:12.173501 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:12 crc kubenswrapper[4940]: I0223 09:05:12.689570 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-248kt"] Feb 23 09:05:13 crc kubenswrapper[4940]: I0223 09:05:13.694641 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerDied","Data":"26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67"} Feb 23 09:05:13 crc kubenswrapper[4940]: I0223 09:05:13.694605 4940 generic.go:334] "Generic (PLEG): container finished" podID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerID="26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67" exitCode=0 Feb 23 09:05:13 crc kubenswrapper[4940]: I0223 09:05:13.695059 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerStarted","Data":"ccefa9bb5d3a31a6f4dc6f7085f287ce69eb1bca7fdbddca9fc86122b92ed4b9"} Feb 23 09:05:14 crc kubenswrapper[4940]: I0223 09:05:14.703945 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerStarted","Data":"8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137"} Feb 23 09:05:15 crc kubenswrapper[4940]: I0223 09:05:15.713683 4940 generic.go:334] "Generic (PLEG): container finished" podID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerID="8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137" exitCode=0 Feb 23 09:05:15 crc kubenswrapper[4940]: I0223 09:05:15.713729 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" 
event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerDied","Data":"8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137"} Feb 23 09:05:16 crc kubenswrapper[4940]: I0223 09:05:16.725759 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerStarted","Data":"5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c"} Feb 23 09:05:16 crc kubenswrapper[4940]: I0223 09:05:16.749433 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-248kt" podStartSLOduration=3.259936448 podStartE2EDuration="5.749412946s" podCreationTimestamp="2026-02-23 09:05:11 +0000 UTC" firstStartedPulling="2026-02-23 09:05:13.696888756 +0000 UTC m=+1045.080094913" lastFinishedPulling="2026-02-23 09:05:16.186365254 +0000 UTC m=+1047.569571411" observedRunningTime="2026-02-23 09:05:16.746049041 +0000 UTC m=+1048.129255268" watchObservedRunningTime="2026-02-23 09:05:16.749412946 +0000 UTC m=+1048.132619113" Feb 23 09:05:19 crc kubenswrapper[4940]: I0223 09:05:19.946911 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:19 crc kubenswrapper[4940]: I0223 09:05:19.989704 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:20 crc kubenswrapper[4940]: I0223 09:05:20.178565 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzg8t"] Feb 23 09:05:21 crc kubenswrapper[4940]: I0223 09:05:21.756992 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mzg8t" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="registry-server" 
containerID="cri-o://18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c" gracePeriod=2 Feb 23 09:05:21 crc kubenswrapper[4940]: I0223 09:05:21.803891 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-68c97fd8b-ls267" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.124835 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.173882 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.174823 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.221003 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmzp\" (UniqueName: \"kubernetes.io/projected/73de16ac-08c7-439d-b525-dc8db35c1115-kube-api-access-cvmzp\") pod \"73de16ac-08c7-439d-b525-dc8db35c1115\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.221074 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-catalog-content\") pod \"73de16ac-08c7-439d-b525-dc8db35c1115\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.221169 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-utilities\") pod \"73de16ac-08c7-439d-b525-dc8db35c1115\" (UID: \"73de16ac-08c7-439d-b525-dc8db35c1115\") " Feb 23 09:05:22 crc kubenswrapper[4940]: 
I0223 09:05:22.223361 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-utilities" (OuterVolumeSpecName: "utilities") pod "73de16ac-08c7-439d-b525-dc8db35c1115" (UID: "73de16ac-08c7-439d-b525-dc8db35c1115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.227980 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73de16ac-08c7-439d-b525-dc8db35c1115-kube-api-access-cvmzp" (OuterVolumeSpecName: "kube-api-access-cvmzp") pod "73de16ac-08c7-439d-b525-dc8db35c1115" (UID: "73de16ac-08c7-439d-b525-dc8db35c1115"). InnerVolumeSpecName "kube-api-access-cvmzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.238374 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.283887 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73de16ac-08c7-439d-b525-dc8db35c1115" (UID: "73de16ac-08c7-439d-b525-dc8db35c1115"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.322837 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.322888 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvmzp\" (UniqueName: \"kubernetes.io/projected/73de16ac-08c7-439d-b525-dc8db35c1115-kube-api-access-cvmzp\") on node \"crc\" DevicePath \"\"" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.322904 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73de16ac-08c7-439d-b525-dc8db35c1115-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.764407 4940 generic.go:334] "Generic (PLEG): container finished" podID="73de16ac-08c7-439d-b525-dc8db35c1115" containerID="18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c" exitCode=0 Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.764463 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzg8t" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.764514 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerDied","Data":"18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c"} Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.764540 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzg8t" event={"ID":"73de16ac-08c7-439d-b525-dc8db35c1115","Type":"ContainerDied","Data":"aafbda75de346d6292865ccc89b783b91c8eed3e1aa0318bbe051385ea387859"} Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.764557 4940 scope.go:117] "RemoveContainer" containerID="18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.786549 4940 scope.go:117] "RemoveContainer" containerID="1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.805534 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mzg8t"] Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.807649 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.824285 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mzg8t"] Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.831049 4940 scope.go:117] "RemoveContainer" containerID="4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.850504 4940 scope.go:117] "RemoveContainer" containerID="18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c" Feb 23 09:05:22 crc 
kubenswrapper[4940]: E0223 09:05:22.850937 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c\": container with ID starting with 18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c not found: ID does not exist" containerID="18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.850971 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c"} err="failed to get container status \"18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c\": rpc error: code = NotFound desc = could not find container \"18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c\": container with ID starting with 18ba686cf9ae06d14ef7f62fccc8705fc05e60e246e6f799e5829844ac8e461c not found: ID does not exist" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.850994 4940 scope.go:117] "RemoveContainer" containerID="1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a" Feb 23 09:05:22 crc kubenswrapper[4940]: E0223 09:05:22.851421 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a\": container with ID starting with 1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a not found: ID does not exist" containerID="1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.851456 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a"} err="failed to get container status 
\"1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a\": rpc error: code = NotFound desc = could not find container \"1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a\": container with ID starting with 1a8cf696bb57c8a1de0c26d41207c2d60e98f9ab32a02fb7bbf9f14e01c1a04a not found: ID does not exist" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.851480 4940 scope.go:117] "RemoveContainer" containerID="4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee" Feb 23 09:05:22 crc kubenswrapper[4940]: E0223 09:05:22.851886 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee\": container with ID starting with 4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee not found: ID does not exist" containerID="4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee" Feb 23 09:05:22 crc kubenswrapper[4940]: I0223 09:05:22.851920 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee"} err="failed to get container status \"4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee\": rpc error: code = NotFound desc = could not find container \"4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee\": container with ID starting with 4b06d1394f246088bda2ac448774f388f7b8b7acd3f9ae2fe24afc6a99fa13ee not found: ID does not exist" Feb 23 09:05:23 crc kubenswrapper[4940]: I0223 09:05:23.361289 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" path="/var/lib/kubelet/pods/73de16ac-08c7-439d-b525-dc8db35c1115/volumes" Feb 23 09:05:24 crc kubenswrapper[4940]: I0223 09:05:24.180090 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-248kt"] Feb 23 
09:05:25 crc kubenswrapper[4940]: I0223 09:05:25.785164 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-248kt" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="registry-server" containerID="cri-o://5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c" gracePeriod=2 Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.170231 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.279770 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kps9\" (UniqueName: \"kubernetes.io/projected/ae338573-cca0-4cea-bf2b-1951fd37edbe-kube-api-access-6kps9\") pod \"ae338573-cca0-4cea-bf2b-1951fd37edbe\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.279942 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-utilities\") pod \"ae338573-cca0-4cea-bf2b-1951fd37edbe\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.281031 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-utilities" (OuterVolumeSpecName: "utilities") pod "ae338573-cca0-4cea-bf2b-1951fd37edbe" (UID: "ae338573-cca0-4cea-bf2b-1951fd37edbe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.281136 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-catalog-content\") pod \"ae338573-cca0-4cea-bf2b-1951fd37edbe\" (UID: \"ae338573-cca0-4cea-bf2b-1951fd37edbe\") " Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.282452 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.286266 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae338573-cca0-4cea-bf2b-1951fd37edbe-kube-api-access-6kps9" (OuterVolumeSpecName: "kube-api-access-6kps9") pod "ae338573-cca0-4cea-bf2b-1951fd37edbe" (UID: "ae338573-cca0-4cea-bf2b-1951fd37edbe"). InnerVolumeSpecName "kube-api-access-6kps9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.302213 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae338573-cca0-4cea-bf2b-1951fd37edbe" (UID: "ae338573-cca0-4cea-bf2b-1951fd37edbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.383929 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kps9\" (UniqueName: \"kubernetes.io/projected/ae338573-cca0-4cea-bf2b-1951fd37edbe-kube-api-access-6kps9\") on node \"crc\" DevicePath \"\"" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.383959 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae338573-cca0-4cea-bf2b-1951fd37edbe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.792111 4940 generic.go:334] "Generic (PLEG): container finished" podID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerID="5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c" exitCode=0 Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.792170 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerDied","Data":"5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c"} Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.792186 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248kt" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.792206 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248kt" event={"ID":"ae338573-cca0-4cea-bf2b-1951fd37edbe","Type":"ContainerDied","Data":"ccefa9bb5d3a31a6f4dc6f7085f287ce69eb1bca7fdbddca9fc86122b92ed4b9"} Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.792235 4940 scope.go:117] "RemoveContainer" containerID="5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.805556 4940 scope.go:117] "RemoveContainer" containerID="8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.824037 4940 scope.go:117] "RemoveContainer" containerID="26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.835995 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-248kt"] Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.841731 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-248kt"] Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.866338 4940 scope.go:117] "RemoveContainer" containerID="5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c" Feb 23 09:05:26 crc kubenswrapper[4940]: E0223 09:05:26.866884 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c\": container with ID starting with 5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c not found: ID does not exist" containerID="5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.866926 4940 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c"} err="failed to get container status \"5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c\": rpc error: code = NotFound desc = could not find container \"5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c\": container with ID starting with 5402381f9a797cef52dc0c77d5b771b9e9c1ccfad96d56268d8c1bd4882c406c not found: ID does not exist" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.866958 4940 scope.go:117] "RemoveContainer" containerID="8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137" Feb 23 09:05:26 crc kubenswrapper[4940]: E0223 09:05:26.867344 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137\": container with ID starting with 8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137 not found: ID does not exist" containerID="8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.867363 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137"} err="failed to get container status \"8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137\": rpc error: code = NotFound desc = could not find container \"8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137\": container with ID starting with 8c3d1696d46cc1e0c4ec2ccf0687c7b5fc8fed45b3e35223fc33de4afe569137 not found: ID does not exist" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.867375 4940 scope.go:117] "RemoveContainer" containerID="26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67" Feb 23 09:05:26 crc kubenswrapper[4940]: E0223 
09:05:26.867639 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67\": container with ID starting with 26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67 not found: ID does not exist" containerID="26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67" Feb 23 09:05:26 crc kubenswrapper[4940]: I0223 09:05:26.867670 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67"} err="failed to get container status \"26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67\": rpc error: code = NotFound desc = could not find container \"26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67\": container with ID starting with 26bc737cb33770a03312b57fd4a99e9c995fa33e585e278dc93a0954d8182c67 not found: ID does not exist" Feb 23 09:05:27 crc kubenswrapper[4940]: I0223 09:05:27.353731 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" path="/var/lib/kubelet/pods/ae338573-cca0-4cea-bf2b-1951fd37edbe/volumes" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.284152 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"] Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.285042 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="extract-content" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.285057 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="extract-content" Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.285070 4940 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="registry-server" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.285078 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="registry-server" Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.285086 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="extract-utilities" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.285093 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="extract-utilities" Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.285112 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="extract-utilities" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.285120 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="extract-utilities" Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.285132 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="extract-content" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.285139 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="extract-content" Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.289543 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="registry-server" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.289564 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="registry-server" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.289718 4940 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="73de16ac-08c7-439d-b525-dc8db35c1115" containerName="registry-server" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.289736 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae338573-cca0-4cea-bf2b-1951fd37edbe" containerName="registry-server" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.290060 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"] Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.290166 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.291221 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.299835 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vw28r" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.304039 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"] Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.309258 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"] Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.310226 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.314709 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.316856 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-48hwt"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.322800 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.323023 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-k4jmb"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.359999 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-p857l"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.360731 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.363966 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-p857l"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.364196 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ldbsj"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.406413 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6tn\" (UniqueName: \"kubernetes.io/projected/c8d94d12-5d54-4c60-85d4-de19e4dfde67-kube-api-access-dt6tn\") pod \"cinder-operator-controller-manager-5d946d989d-bqhhr\" (UID: \"c8d94d12-5d54-4c60-85d4-de19e4dfde67\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.406489 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2m59\" (UniqueName: \"kubernetes.io/projected/dfc9a681-c309-4803-9be0-6150d615b023-kube-api-access-p2m59\") pod \"barbican-operator-controller-manager-868647ff47-vp5zb\" (UID: \"dfc9a681-c309-4803-9be0-6150d615b023\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.406562 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszqn\" (UniqueName: \"kubernetes.io/projected/0d68e7dc-1d8e-4edd-a2f9-585043e15a98-kube-api-access-rszqn\") pod \"designate-operator-controller-manager-6d8bf5c495-92fk4\" (UID: \"0d68e7dc-1d8e-4edd-a2f9-585043e15a98\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.418019 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.419537 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.424137 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ldnnl"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.446688 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.471603 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.474737 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vl94z"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.483717 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.506595 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.507305 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw567\" (UniqueName: \"kubernetes.io/projected/2a7c5730-7ed4-44b1-832d-109fa4460dc5-kube-api-access-xw567\") pod \"glance-operator-controller-manager-77987464f4-p857l\" (UID: \"2a7c5730-7ed4-44b1-832d-109fa4460dc5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.507377 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6tn\" (UniqueName: \"kubernetes.io/projected/c8d94d12-5d54-4c60-85d4-de19e4dfde67-kube-api-access-dt6tn\") pod \"cinder-operator-controller-manager-5d946d989d-bqhhr\" (UID: \"c8d94d12-5d54-4c60-85d4-de19e4dfde67\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.507402 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2m59\" (UniqueName: \"kubernetes.io/projected/dfc9a681-c309-4803-9be0-6150d615b023-kube-api-access-p2m59\") pod \"barbican-operator-controller-manager-868647ff47-vp5zb\" (UID: \"dfc9a681-c309-4803-9be0-6150d615b023\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.507437 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszqn\" (UniqueName: \"kubernetes.io/projected/0d68e7dc-1d8e-4edd-a2f9-585043e15a98-kube-api-access-rszqn\") pod \"designate-operator-controller-manager-6d8bf5c495-92fk4\" (UID: \"0d68e7dc-1d8e-4edd-a2f9-585043e15a98\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.507645 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.511547 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.511778 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.511796 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b7dr6"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.538668 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6tn\" (UniqueName: \"kubernetes.io/projected/c8d94d12-5d54-4c60-85d4-de19e4dfde67-kube-api-access-dt6tn\") pod \"cinder-operator-controller-manager-5d946d989d-bqhhr\" (UID: \"c8d94d12-5d54-4c60-85d4-de19e4dfde67\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.543595 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszqn\" (UniqueName: \"kubernetes.io/projected/0d68e7dc-1d8e-4edd-a2f9-585043e15a98-kube-api-access-rszqn\") pod \"designate-operator-controller-manager-6d8bf5c495-92fk4\" (UID: \"0d68e7dc-1d8e-4edd-a2f9-585043e15a98\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.546184 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2m59\" (UniqueName: \"kubernetes.io/projected/dfc9a681-c309-4803-9be0-6150d615b023-kube-api-access-p2m59\") pod \"barbican-operator-controller-manager-868647ff47-vp5zb\" (UID: \"dfc9a681-c309-4803-9be0-6150d615b023\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.573798 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.582123 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.583097 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.585945 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bhzd4"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.586109 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.598654 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.599547 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.605296 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2258b"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.609022 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfntf\" (UniqueName: \"kubernetes.io/projected/61343538-79c0-4565-ae70-a397b5fd6b2f-kube-api-access-vfntf\") pod \"heat-operator-controller-manager-69f49c598c-pvb4b\" (UID: \"61343538-79c0-4565-ae70-a397b5fd6b2f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.609096 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.609132 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw567\" (UniqueName: \"kubernetes.io/projected/2a7c5730-7ed4-44b1-832d-109fa4460dc5-kube-api-access-xw567\") pod \"glance-operator-controller-manager-77987464f4-p857l\" (UID: \"2a7c5730-7ed4-44b1-832d-109fa4460dc5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.609190 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4psm\" (UniqueName: \"kubernetes.io/projected/db71f743-426e-4fe8-ab74-17c3f68798fc-kube-api-access-b4psm\") pod \"horizon-operator-controller-manager-5b9b8895d5-qzd5f\" (UID: \"db71f743-426e-4fe8-ab74-17c3f68798fc\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.609262 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79lfj\" (UniqueName: \"kubernetes.io/projected/82d3766e-53e7-4dc8-9c9b-d71e9d930595-kube-api-access-79lfj\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.615083 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.615426 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.621117 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.622222 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.627834 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8hlb8"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.631345 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.632885 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw567\" (UniqueName: \"kubernetes.io/projected/2a7c5730-7ed4-44b1-832d-109fa4460dc5-kube-api-access-xw567\") pod \"glance-operator-controller-manager-77987464f4-p857l\" (UID: \"2a7c5730-7ed4-44b1-832d-109fa4460dc5\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.635132 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.646200 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.651472 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.651510 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.651910 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.652281 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.660019 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.662789 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-w6vk8"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.663097 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g9s68"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.669240 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.682057 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.682434 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.683049 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.690120 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tw7b5"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.690265 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.699790 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.700777 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.703418 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-46szh"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715645 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715710 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz87\" (UniqueName: \"kubernetes.io/projected/2fb7ee71-a9af-4504-8899-932449157080-kube-api-access-8fz87\") pod \"keystone-operator-controller-manager-b4d948c87-zqz6k\" (UID: \"2fb7ee71-a9af-4504-8899-932449157080\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4psm\" (UniqueName: \"kubernetes.io/projected/db71f743-426e-4fe8-ab74-17c3f68798fc-kube-api-access-b4psm\") pod \"horizon-operator-controller-manager-5b9b8895d5-qzd5f\" (UID: \"db71f743-426e-4fe8-ab74-17c3f68798fc\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715775 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjq4j\" (UniqueName: \"kubernetes.io/projected/e05a318b-495f-49c1-83cf-056d5ce99c8c-kube-api-access-xjq4j\") pod \"mariadb-operator-controller-manager-6994f66f48-rwvf9\" (UID: \"e05a318b-495f-49c1-83cf-056d5ce99c8c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715837 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79lfj\" (UniqueName: \"kubernetes.io/projected/82d3766e-53e7-4dc8-9c9b-d71e9d930595-kube-api-access-79lfj\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715881 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plsxq\" (UniqueName: \"kubernetes.io/projected/34061626-0f45-4bb5-a16f-9059fa45be7f-kube-api-access-plsxq\") pod \"ironic-operator-controller-manager-554564d7fc-8wv98\" (UID: \"34061626-0f45-4bb5-a16f-9059fa45be7f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.715902 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfntf\" (UniqueName: \"kubernetes.io/projected/61343538-79c0-4565-ae70-a397b5fd6b2f-kube-api-access-vfntf\") pod \"heat-operator-controller-manager-69f49c598c-pvb4b\" (UID: \"61343538-79c0-4565-ae70-a397b5fd6b2f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"
Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.716127 4940 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.716228 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert podName:82d3766e-53e7-4dc8-9c9b-d71e9d930595 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:42.216213112 +0000 UTC m=+1073.599419269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert") pod "infra-operator-controller-manager-79d975b745-86vf7" (UID: "82d3766e-53e7-4dc8-9c9b-d71e9d930595") : secret "infra-operator-webhook-server-cert" not found
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.722225 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.730400 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.731245 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.731334 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.733417 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.733981 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.734033 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4psm\" (UniqueName: \"kubernetes.io/projected/db71f743-426e-4fe8-ab74-17c3f68798fc-kube-api-access-b4psm\") pod \"horizon-operator-controller-manager-5b9b8895d5-qzd5f\" (UID: \"db71f743-426e-4fe8-ab74-17c3f68798fc\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.737834 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79lfj\" (UniqueName: \"kubernetes.io/projected/82d3766e-53e7-4dc8-9c9b-d71e9d930595-kube-api-access-79lfj\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.737963 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.739817 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.740622 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.741851 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qlb7p"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.742326 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.742594 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mmp46"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.742835 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gdcnb"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.745260 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-qzv55"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.746283 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.747459 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jxg7c"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.748862 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.749599 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfntf\" (UniqueName: \"kubernetes.io/projected/61343538-79c0-4565-ae70-a397b5fd6b2f-kube-api-access-vfntf\") pod \"heat-operator-controller-manager-69f49c598c-pvb4b\" (UID: \"61343538-79c0-4565-ae70-a397b5fd6b2f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.756223 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-qzv55"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.765524 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.766337 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.768225 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8cqph"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.796081 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817024 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fz87\" (UniqueName: \"kubernetes.io/projected/2fb7ee71-a9af-4504-8899-932449157080-kube-api-access-8fz87\") pod \"keystone-operator-controller-manager-b4d948c87-zqz6k\" (UID: \"2fb7ee71-a9af-4504-8899-932449157080\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817083 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db76k\" (UniqueName: \"kubernetes.io/projected/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-kube-api-access-db76k\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817123 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817161 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjq4j\" (UniqueName: \"kubernetes.io/projected/e05a318b-495f-49c1-83cf-056d5ce99c8c-kube-api-access-xjq4j\") pod \"mariadb-operator-controller-manager-6994f66f48-rwvf9\" (UID: \"e05a318b-495f-49c1-83cf-056d5ce99c8c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817181 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfzrh\" (UniqueName: \"kubernetes.io/projected/15249b0f-c437-4d93-b97a-c7e078139e07-kube-api-access-cfzrh\") pod \"neutron-operator-controller-manager-64ddbf8bb-6nlcd\" (UID: \"15249b0f-c437-4d93-b97a-c7e078139e07\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817215 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rswcm\" (UniqueName: \"kubernetes.io/projected/780fe903-e160-47c9-9291-31ee2d139266-kube-api-access-rswcm\") pod \"manila-operator-controller-manager-54f6768c69-vh4r6\" (UID: \"780fe903-e160-47c9-9291-31ee2d139266\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817247 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cd5q\" (UniqueName: \"kubernetes.io/projected/32fc4d76-59e1-44b3-ace9-e9f14dc4f86a-kube-api-access-6cd5q\") pod \"octavia-operator-controller-manager-69f8888797-cmbf8\" (UID: \"32fc4d76-59e1-44b3-ace9-e9f14dc4f86a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817293 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvkpb\" (UniqueName: \"kubernetes.io/projected/d2c13199-d708-496b-b69a-43fba1068955-kube-api-access-dvkpb\") pod \"nova-operator-controller-manager-567668f5cf-6ztzk\" (UID: \"d2c13199-d708-496b-b69a-43fba1068955\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.817318 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plsxq\" (UniqueName: \"kubernetes.io/projected/34061626-0f45-4bb5-a16f-9059fa45be7f-kube-api-access-plsxq\") pod \"ironic-operator-controller-manager-554564d7fc-8wv98\" (UID: \"34061626-0f45-4bb5-a16f-9059fa45be7f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.941696 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fz87\" (UniqueName: \"kubernetes.io/projected/2fb7ee71-a9af-4504-8899-932449157080-kube-api-access-8fz87\") pod \"keystone-operator-controller-manager-b4d948c87-zqz6k\" (UID: \"2fb7ee71-a9af-4504-8899-932449157080\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.942922 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd"]
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961229 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmhx\" (UniqueName: \"kubernetes.io/projected/bda50d0f-3559-47b6-9ee2-8104750b30c4-kube-api-access-cwmhx\") pod \"placement-operator-controller-manager-8497b45c89-zb9xm\" (UID: \"bda50d0f-3559-47b6-9ee2-8104750b30c4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961419 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db76k\" (UniqueName: \"kubernetes.io/projected/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-kube-api-access-db76k\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961498 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961597 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdql\" (UniqueName: \"kubernetes.io/projected/e810e429-c05d-4451-a863-196e8e071d9b-kube-api-access-cjdql\") pod \"ovn-operator-controller-manager-d44cf6b75-58p99\" (UID: \"e810e429-c05d-4451-a863-196e8e071d9b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961672 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfzrh\" (UniqueName: \"kubernetes.io/projected/15249b0f-c437-4d93-b97a-c7e078139e07-kube-api-access-cfzrh\") pod \"neutron-operator-controller-manager-64ddbf8bb-6nlcd\" (UID: \"15249b0f-c437-4d93-b97a-c7e078139e07\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rswcm\" (UniqueName: \"kubernetes.io/projected/780fe903-e160-47c9-9291-31ee2d139266-kube-api-access-rswcm\") pod \"manila-operator-controller-manager-54f6768c69-vh4r6\" (UID: \"780fe903-e160-47c9-9291-31ee2d139266\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.961814 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm765\" (UniqueName: \"kubernetes.io/projected/d2fb7a6a-317d-4180-bcc3-07087b8a48ba-kube-api-access-vm765\") pod \"telemetry-operator-controller-manager-7f45b4ff68-khtmd\" (UID: \"d2fb7a6a-317d-4180-bcc3-07087b8a48ba\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd"
Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.961883 4940 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 09:05:41 crc kubenswrapper[4940]: E0223 09:05:41.961976 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert podName:70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e nodeName:}" failed. No retries permitted until 2026-02-23 09:05:42.461943175 +0000 UTC m=+1073.845149332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" (UID: "70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.962013 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqdt\" (UniqueName: \"kubernetes.io/projected/8d39a603-93c8-4c09-a1d2-97e6c14902fe-kube-api-access-zhqdt\") pod \"swift-operator-controller-manager-68f46476f-qzv55\" (UID: \"8d39a603-93c8-4c09-a1d2-97e6c14902fe\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.962060 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cd5q\" (UniqueName: \"kubernetes.io/projected/32fc4d76-59e1-44b3-ace9-e9f14dc4f86a-kube-api-access-6cd5q\") pod \"octavia-operator-controller-manager-69f8888797-cmbf8\" (UID: \"32fc4d76-59e1-44b3-ace9-e9f14dc4f86a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.962153 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvkpb\" (UniqueName: \"kubernetes.io/projected/d2c13199-d708-496b-b69a-43fba1068955-kube-api-access-dvkpb\") pod \"nova-operator-controller-manager-567668f5cf-6ztzk\" (UID: \"d2c13199-d708-496b-b69a-43fba1068955\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.966743 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plsxq\" (UniqueName: \"kubernetes.io/projected/34061626-0f45-4bb5-a16f-9059fa45be7f-kube-api-access-plsxq\") pod \"ironic-operator-controller-manager-554564d7fc-8wv98\" (UID: \"34061626-0f45-4bb5-a16f-9059fa45be7f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.982685 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjq4j\" (UniqueName: \"kubernetes.io/projected/e05a318b-495f-49c1-83cf-056d5ce99c8c-kube-api-access-xjq4j\") pod \"mariadb-operator-controller-manager-6994f66f48-rwvf9\" (UID: \"e05a318b-495f-49c1-83cf-056d5ce99c8c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"
Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.983999 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rswcm\" (UniqueName: \"kubernetes.io/projected/780fe903-e160-47c9-9291-31ee2d139266-kube-api-access-rswcm\") pod \"manila-operator-controller-manager-54f6768c69-vh4r6\" (UID: \"780fe903-e160-47c9-9291-31ee2d139266\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"
Feb 
23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.989093 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db76k\" (UniqueName: \"kubernetes.io/projected/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-kube-api-access-db76k\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:05:41 crc kubenswrapper[4940]: I0223 09:05:41.992251 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cd5q\" (UniqueName: \"kubernetes.io/projected/32fc4d76-59e1-44b3-ace9-e9f14dc4f86a-kube-api-access-6cd5q\") pod \"octavia-operator-controller-manager-69f8888797-cmbf8\" (UID: \"32fc4d76-59e1-44b3-ace9-e9f14dc4f86a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.010163 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfzrh\" (UniqueName: \"kubernetes.io/projected/15249b0f-c437-4d93-b97a-c7e078139e07-kube-api-access-cfzrh\") pod \"neutron-operator-controller-manager-64ddbf8bb-6nlcd\" (UID: \"15249b0f-c437-4d93-b97a-c7e078139e07\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.012686 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvkpb\" (UniqueName: \"kubernetes.io/projected/d2c13199-d708-496b-b69a-43fba1068955-kube-api-access-dvkpb\") pod \"nova-operator-controller-manager-567668f5cf-6ztzk\" (UID: \"d2c13199-d708-496b-b69a-43fba1068955\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.028878 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.052459 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.054511 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.064954 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdql\" (UniqueName: \"kubernetes.io/projected/e810e429-c05d-4451-a863-196e8e071d9b-kube-api-access-cjdql\") pod \"ovn-operator-controller-manager-d44cf6b75-58p99\" (UID: \"e810e429-c05d-4451-a863-196e8e071d9b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.065022 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm765\" (UniqueName: \"kubernetes.io/projected/d2fb7a6a-317d-4180-bcc3-07087b8a48ba-kube-api-access-vm765\") pod \"telemetry-operator-controller-manager-7f45b4ff68-khtmd\" (UID: \"d2fb7a6a-317d-4180-bcc3-07087b8a48ba\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.065064 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqdt\" (UniqueName: \"kubernetes.io/projected/8d39a603-93c8-4c09-a1d2-97e6c14902fe-kube-api-access-zhqdt\") pod \"swift-operator-controller-manager-68f46476f-qzv55\" (UID: \"8d39a603-93c8-4c09-a1d2-97e6c14902fe\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.065151 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwmhx\" (UniqueName: \"kubernetes.io/projected/bda50d0f-3559-47b6-9ee2-8104750b30c4-kube-api-access-cwmhx\") pod \"placement-operator-controller-manager-8497b45c89-zb9xm\" (UID: \"bda50d0f-3559-47b6-9ee2-8104750b30c4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.076702 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.088896 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwmhx\" (UniqueName: \"kubernetes.io/projected/bda50d0f-3559-47b6-9ee2-8104750b30c4-kube-api-access-cwmhx\") pod \"placement-operator-controller-manager-8497b45c89-zb9xm\" (UID: \"bda50d0f-3559-47b6-9ee2-8104750b30c4\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.104456 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdql\" (UniqueName: \"kubernetes.io/projected/e810e429-c05d-4451-a863-196e8e071d9b-kube-api-access-cjdql\") pod \"ovn-operator-controller-manager-d44cf6b75-58p99\" (UID: \"e810e429-c05d-4451-a863-196e8e071d9b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.104550 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-phggr"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.107038 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.107436 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.109734 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm765\" (UniqueName: \"kubernetes.io/projected/d2fb7a6a-317d-4180-bcc3-07087b8a48ba-kube-api-access-vm765\") pod \"telemetry-operator-controller-manager-7f45b4ff68-khtmd\" (UID: \"d2fb7a6a-317d-4180-bcc3-07087b8a48ba\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.109920 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-phggr"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.110362 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.110368 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqdt\" (UniqueName: \"kubernetes.io/projected/8d39a603-93c8-4c09-a1d2-97e6c14902fe-kube-api-access-zhqdt\") pod \"swift-operator-controller-manager-68f46476f-qzv55\" (UID: \"8d39a603-93c8-4c09-a1d2-97e6c14902fe\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.117160 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cnp5w" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.118279 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.136125 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.137406 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.142306 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.152576 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-b75tk" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.164744 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.166025 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.167370 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq8z2\" (UniqueName: \"kubernetes.io/projected/c81581e5-15a7-4b56-9b22-ecfd026749bc-kube-api-access-rq8z2\") pod \"test-operator-controller-manager-7866795846-phggr\" (UID: \"c81581e5-15a7-4b56-9b22-ecfd026749bc\") " pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.173439 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.173703 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.174053 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qrhjk" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.175831 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.208277 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.230157 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.236357 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.242578 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.242868 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.246054 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-j5f4l" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.268848 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.268898 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.268932 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq8z2\" (UniqueName: \"kubernetes.io/projected/c81581e5-15a7-4b56-9b22-ecfd026749bc-kube-api-access-rq8z2\") pod \"test-operator-controller-manager-7866795846-phggr\" (UID: \"c81581e5-15a7-4b56-9b22-ecfd026749bc\") " 
pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.269005 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.269027 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc8f5\" (UniqueName: \"kubernetes.io/projected/c6e874c6-520a-40fa-b182-e7a0daab54c7-kube-api-access-dc8f5\") pod \"watcher-operator-controller-manager-5db88f68c-s2vxb\" (UID: \"c6e874c6-520a-40fa-b182-e7a0daab54c7\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.269059 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88tb9\" (UniqueName: \"kubernetes.io/projected/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-kube-api-access-88tb9\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.269375 4940 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.269421 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert podName:82d3766e-53e7-4dc8-9c9b-d71e9d930595 nodeName:}" failed. 
No retries permitted until 2026-02-23 09:05:43.269407637 +0000 UTC m=+1074.652613794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert") pod "infra-operator-controller-manager-79d975b745-86vf7" (UID: "82d3766e-53e7-4dc8-9c9b-d71e9d930595") : secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.286013 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb"] Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.351733 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.353754 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.356320 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq8z2\" (UniqueName: \"kubernetes.io/projected/c81581e5-15a7-4b56-9b22-ecfd026749bc-kube-api-access-rq8z2\") pod \"test-operator-controller-manager-7866795846-phggr\" (UID: \"c81581e5-15a7-4b56-9b22-ecfd026749bc\") " pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.365941 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.370394 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.370439 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fskp\" (UniqueName: \"kubernetes.io/projected/69a079c2-ac60-4b97-ae60-25c8189e6816-kube-api-access-5fskp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gk729\" (UID: \"69a079c2-ac60-4b97-ae60-25c8189e6816\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.370513 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.370529 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc8f5\" (UniqueName: \"kubernetes.io/projected/c6e874c6-520a-40fa-b182-e7a0daab54c7-kube-api-access-dc8f5\") pod \"watcher-operator-controller-manager-5db88f68c-s2vxb\" (UID: \"c6e874c6-520a-40fa-b182-e7a0daab54c7\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.370550 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88tb9\" (UniqueName: \"kubernetes.io/projected/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-kube-api-access-88tb9\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.371746 4940 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.371790 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:42.871776181 +0000 UTC m=+1074.254982338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "webhook-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.372096 4940 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.372120 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:42.872113411 +0000 UTC m=+1074.255319568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "metrics-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.395212 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc8f5\" (UniqueName: \"kubernetes.io/projected/c6e874c6-520a-40fa-b182-e7a0daab54c7-kube-api-access-dc8f5\") pod \"watcher-operator-controller-manager-5db88f68c-s2vxb\" (UID: \"c6e874c6-520a-40fa-b182-e7a0daab54c7\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.396273 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88tb9\" (UniqueName: \"kubernetes.io/projected/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-kube-api-access-88tb9\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.458971 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.472122 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.472195 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fskp\" (UniqueName: \"kubernetes.io/projected/69a079c2-ac60-4b97-ae60-25c8189e6816-kube-api-access-5fskp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gk729\" (UID: \"69a079c2-ac60-4b97-ae60-25c8189e6816\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.472557 4940 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: E0223 09:05:42.472752 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert podName:70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e nodeName:}" failed. No retries permitted until 2026-02-23 09:05:43.472726961 +0000 UTC m=+1074.855933118 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" (UID: "70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.482398 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.492219 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fskp\" (UniqueName: \"kubernetes.io/projected/69a079c2-ac60-4b97-ae60-25c8189e6816-kube-api-access-5fskp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gk729\" (UID: \"69a079c2-ac60-4b97-ae60-25c8189e6816\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" Feb 23 09:05:42 crc kubenswrapper[4940]: I0223 09:05:42.562335 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" Feb 23 09:05:43 crc kubenswrapper[4940]: I0223 09:05:42.888517 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:43 crc kubenswrapper[4940]: I0223 09:05:42.888831 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:42.889006 4940 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:42.889065 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:43.88904761 +0000 UTC m=+1075.272253767 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "webhook-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:42.890836 4940 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:42.890927 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:43.890904757 +0000 UTC m=+1075.274110914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "metrics-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: I0223 09:05:43.386178 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:43.386464 4940 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:43.386527 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert 
podName:82d3766e-53e7-4dc8-9c9b-d71e9d930595 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:45.386508032 +0000 UTC m=+1076.769714179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert") pod "infra-operator-controller-manager-79d975b745-86vf7" (UID: "82d3766e-53e7-4dc8-9c9b-d71e9d930595") : secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: I0223 09:05:43.450535 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb" event={"ID":"dfc9a681-c309-4803-9be0-6150d615b023","Type":"ContainerStarted","Data":"0c92a5d439d34a2663a2485eba10f8ebb72d94f4a66571aca412dc7b8becf3fa"} Feb 23 09:05:43 crc kubenswrapper[4940]: I0223 09:05:43.487583 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:43.487835 4940 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:43 crc kubenswrapper[4940]: E0223 09:05:43.487899 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert podName:70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e nodeName:}" failed. No retries permitted until 2026-02-23 09:05:45.487879635 +0000 UTC m=+1076.871085792 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" (UID: "70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:44 crc kubenswrapper[4940]: I0223 09:05:44.155828 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:44 crc kubenswrapper[4940]: I0223 09:05:44.156135 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:44 crc kubenswrapper[4940]: E0223 09:05:44.156285 4940 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 09:05:44 crc kubenswrapper[4940]: E0223 09:05:44.156339 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:46.156319454 +0000 UTC m=+1077.539525611 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "webhook-server-cert" not found Feb 23 09:05:44 crc kubenswrapper[4940]: E0223 09:05:44.156469 4940 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 09:05:44 crc kubenswrapper[4940]: E0223 09:05:44.156496 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:46.15648445 +0000 UTC m=+1077.539690607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "metrics-server-cert" not found Feb 23 09:05:44 crc kubenswrapper[4940]: I0223 09:05:44.739328 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f"] Feb 23 09:05:45 crc kubenswrapper[4940]: W0223 09:05:45.066764 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb71f743_426e_4fe8_ab74_17c3f68798fc.slice/crio-346d2c1ad0c37a262b7706b9c6b319faf8c040d4fa14eba254164212ab59bcda WatchSource:0}: Error finding container 346d2c1ad0c37a262b7706b9c6b319faf8c040d4fa14eba254164212ab59bcda: Status 404 returned error can't find the container with id 346d2c1ad0c37a262b7706b9c6b319faf8c040d4fa14eba254164212ab59bcda Feb 23 09:05:45 crc kubenswrapper[4940]: I0223 09:05:45.644558 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:45 crc kubenswrapper[4940]: I0223 09:05:45.645005 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:05:45 crc kubenswrapper[4940]: E0223 09:05:45.645795 4940 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:45 crc kubenswrapper[4940]: E0223 09:05:45.645869 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert podName:82d3766e-53e7-4dc8-9c9b-d71e9d930595 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:49.645848032 +0000 UTC m=+1081.029054189 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert") pod "infra-operator-controller-manager-79d975b745-86vf7" (UID: "82d3766e-53e7-4dc8-9c9b-d71e9d930595") : secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:45 crc kubenswrapper[4940]: E0223 09:05:45.646143 4940 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:45 crc kubenswrapper[4940]: E0223 09:05:45.646200 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert podName:70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e nodeName:}" failed. No retries permitted until 2026-02-23 09:05:49.646184362 +0000 UTC m=+1081.029390519 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" (UID: "70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:45 crc kubenswrapper[4940]: I0223 09:05:45.742645 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f" event={"ID":"db71f743-426e-4fe8-ab74-17c3f68798fc","Type":"ContainerStarted","Data":"346d2c1ad0c37a262b7706b9c6b319faf8c040d4fa14eba254164212ab59bcda"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.304890 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " 
pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.304994 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.305142 4940 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.305193 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:50.305177768 +0000 UTC m=+1081.688383915 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "metrics-server-cert" not found Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.305319 4940 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.305435 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:50.305399215 +0000 UTC m=+1081.688605372 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "webhook-server-cert" not found Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.404825 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.410706 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.418965 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-p857l"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.426976 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.434634 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.448056 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.454543 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.463688 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.482856 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.492753 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.531680 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb"] Feb 23 09:05:46 crc kubenswrapper[4940]: W0223 09:05:46.541285 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15249b0f_c437_4d93_b97a_c7e078139e07.slice/crio-ff1de984a4d965e34b4d7b0b31be7259cf195babee9dd95641524b6de888bd9a WatchSource:0}: Error finding container ff1de984a4d965e34b4d7b0b31be7259cf195babee9dd95641524b6de888bd9a: Status 404 returned error can't find the container with id ff1de984a4d965e34b4d7b0b31be7259cf195babee9dd95641524b6de888bd9a Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.549885 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.556214 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-phggr"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.559688 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-qzv55"] Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.565400 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhqdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-qzv55_openstack-operators(8d39a603-93c8-4c09-a1d2-97e6c14902fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.565505 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dc8f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-s2vxb_openstack-operators(c6e874c6-520a-40fa-b182-e7a0daab54c7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.565621 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rq8z2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-phggr_openstack-operators(c81581e5-15a7-4b56-9b22-ecfd026749bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.565737 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cd5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-cmbf8_openstack-operators(32fc4d76-59e1-44b3-ace9-e9f14dc4f86a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.567093 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" podUID="32fc4d76-59e1-44b3-ace9-e9f14dc4f86a" Feb 23 09:05:46 crc 
kubenswrapper[4940]: E0223 09:05:46.567240 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" podUID="8d39a603-93c8-4c09-a1d2-97e6c14902fe" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.567336 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" podUID="c6e874c6-520a-40fa-b182-e7a0daab54c7" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.567413 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" podUID="c81581e5-15a7-4b56-9b22-ecfd026749bc" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.588863 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98"] Feb 23 09:05:46 crc kubenswrapper[4940]: W0223 09:05:46.591955 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34061626_0f45_4bb5_a16f_9059fa45be7f.slice/crio-580a3ecdb79a35e733d7e8396520de523bd7b621a5e2b709fed13d9ab53e1762 WatchSource:0}: Error finding container 580a3ecdb79a35e733d7e8396520de523bd7b621a5e2b709fed13d9ab53e1762: Status 404 returned error can't find the container with id 580a3ecdb79a35e733d7e8396520de523bd7b621a5e2b709fed13d9ab53e1762 Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.595424 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plsxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-8wv98_openstack-operators(34061626-0f45-4bb5-a16f-9059fa45be7f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.597092 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" podUID="34061626-0f45-4bb5-a16f-9059fa45be7f" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.748073 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.753326 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.758363 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" 
event={"ID":"34061626-0f45-4bb5-a16f-9059fa45be7f","Type":"ContainerStarted","Data":"580a3ecdb79a35e733d7e8396520de523bd7b621a5e2b709fed13d9ab53e1762"} Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.760354 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" podUID="34061626-0f45-4bb5-a16f-9059fa45be7f" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.760960 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99"] Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.761980 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" event={"ID":"8d39a603-93c8-4c09-a1d2-97e6c14902fe","Type":"ContainerStarted","Data":"88d165c9296ff73b49697f09f73249ed6cc4a37937880e00f25029e1e0a4af8c"} Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.763520 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" podUID="8d39a603-93c8-4c09-a1d2-97e6c14902fe" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.764267 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6" event={"ID":"780fe903-e160-47c9-9291-31ee2d139266","Type":"ContainerStarted","Data":"fcd20ac1910cd39a75c38adbb71771380bab8152b29156ebfeac8e084a71af9c"} Feb 23 09:05:46 crc 
kubenswrapper[4940]: I0223 09:05:46.766509 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" event={"ID":"d2fb7a6a-317d-4180-bcc3-07087b8a48ba","Type":"ContainerStarted","Data":"182c69217387ab82b720e48bc666a6d9e8a9aa42c189a65667c4eec19d7dd40d"} Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.767988 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8fz87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-zqz6k_openstack-operators(2fb7ee71-a9af-4504-8899-932449157080): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.769338 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" podUID="2fb7ee71-a9af-4504-8899-932449157080" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.770584 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" event={"ID":"69a079c2-ac60-4b97-ae60-25c8189e6816","Type":"ContainerStarted","Data":"8c34446205f7c2042815e95006007b0ed104eb18f1d052495534699741714bf9"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.778907 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" 
event={"ID":"d2c13199-d708-496b-b69a-43fba1068955","Type":"ContainerStarted","Data":"19630fe3b3a1757a7e72ec59bd77d174004cf4e5d17e029c428494ad0d7a78a4"} Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.796861 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cjdql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-58p99_openstack-operators(e810e429-c05d-4451-a863-196e8e071d9b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.798018 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" podUID="e810e429-c05d-4451-a863-196e8e071d9b" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.815740 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b" event={"ID":"61343538-79c0-4565-ae70-a397b5fd6b2f","Type":"ContainerStarted","Data":"376516f3e661c779bd7bc3de75bdf1820f3858b1e7b128289574da1c5b33bba5"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.819851 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" event={"ID":"c6e874c6-520a-40fa-b182-e7a0daab54c7","Type":"ContainerStarted","Data":"cfce94c24d79b31001e47b27a6242a1d92f4844c0b0b92f6cc02cce81f498993"} Feb 
23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.823033 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" podUID="c6e874c6-520a-40fa-b182-e7a0daab54c7" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.828511 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr" event={"ID":"c8d94d12-5d54-4c60-85d4-de19e4dfde67","Type":"ContainerStarted","Data":"b768cf39a80fb47bd075cf47ebcc81182bf3bf9e2eb57fb67f9e9aeb12971686"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.831522 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4" event={"ID":"0d68e7dc-1d8e-4edd-a2f9-585043e15a98","Type":"ContainerStarted","Data":"d1255e007c5e9fc3241615e147e6e310816a51c5b447f3bf3ea1b1d058f9ff45"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.832667 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l" event={"ID":"2a7c5730-7ed4-44b1-832d-109fa4460dc5","Type":"ContainerStarted","Data":"16460935e174c5b0c9ad404aadb81887edb3ce5d78851008dd25793c105b7742"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.834122 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" event={"ID":"e05a318b-495f-49c1-83cf-056d5ce99c8c","Type":"ContainerStarted","Data":"610935fda2448ce1a36259c366f859328a13e4fe18331ac85c6187f7f9f9e8c3"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.851375 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" event={"ID":"15249b0f-c437-4d93-b97a-c7e078139e07","Type":"ContainerStarted","Data":"ff1de984a4d965e34b4d7b0b31be7259cf195babee9dd95641524b6de888bd9a"} Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.855330 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" event={"ID":"32fc4d76-59e1-44b3-ace9-e9f14dc4f86a","Type":"ContainerStarted","Data":"734347a767abb7713d9e16bfdfb6f0ac5cb0c58369f43a3d869c7c9914ec97d9"} Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.857380 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" podUID="32fc4d76-59e1-44b3-ace9-e9f14dc4f86a" Feb 23 09:05:46 crc kubenswrapper[4940]: I0223 09:05:46.857875 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" event={"ID":"c81581e5-15a7-4b56-9b22-ecfd026749bc","Type":"ContainerStarted","Data":"09c6ca42988fe62962f49a0dbf0c458ebfc521f911ad641cc2e6bef8186a0520"} Feb 23 09:05:46 crc kubenswrapper[4940]: E0223 09:05:46.859591 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" podUID="c81581e5-15a7-4b56-9b22-ecfd026749bc" Feb 23 09:05:47 crc kubenswrapper[4940]: I0223 09:05:47.866364 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" event={"ID":"2fb7ee71-a9af-4504-8899-932449157080","Type":"ContainerStarted","Data":"6ae0b46524f56b2dd2533bb40fcfa2c51807b4afd92727ececd9329bad16505e"} Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.867574 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" podUID="2fb7ee71-a9af-4504-8899-932449157080" Feb 23 09:05:47 crc kubenswrapper[4940]: I0223 09:05:47.872485 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" event={"ID":"bda50d0f-3559-47b6-9ee2-8104750b30c4","Type":"ContainerStarted","Data":"1fd4f09c341a3d6df275c72e8679c0f57a4897b9262633bf56c7cc11ef90e29b"} Feb 23 09:05:47 crc kubenswrapper[4940]: I0223 09:05:47.874022 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" event={"ID":"e810e429-c05d-4451-a863-196e8e071d9b","Type":"ContainerStarted","Data":"46b46874938809b4124507de479104bdd717cb39e0edcb086466f9d7428fa44e"} Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.875192 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" podUID="c81581e5-15a7-4b56-9b22-ecfd026749bc" Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.875411 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" podUID="32fc4d76-59e1-44b3-ace9-e9f14dc4f86a" Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.878132 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" podUID="8d39a603-93c8-4c09-a1d2-97e6c14902fe" Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.878467 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" podUID="c6e874c6-520a-40fa-b182-e7a0daab54c7" Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.878529 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" podUID="34061626-0f45-4bb5-a16f-9059fa45be7f" Feb 23 09:05:47 crc kubenswrapper[4940]: E0223 09:05:47.878620 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" podUID="e810e429-c05d-4451-a863-196e8e071d9b" Feb 23 09:05:49 crc kubenswrapper[4940]: E0223 09:05:49.153756 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" podUID="2fb7ee71-a9af-4504-8899-932449157080" Feb 23 09:05:49 crc kubenswrapper[4940]: E0223 09:05:49.154042 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" podUID="e810e429-c05d-4451-a863-196e8e071d9b" Feb 23 09:05:49 crc kubenswrapper[4940]: I0223 09:05:49.654507 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:49 crc kubenswrapper[4940]: I0223 09:05:49.654582 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:05:49 crc kubenswrapper[4940]: E0223 09:05:49.654771 4940 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:49 crc kubenswrapper[4940]: E0223 09:05:49.654817 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert podName:70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e nodeName:}" failed. No retries permitted until 2026-02-23 09:05:57.654803518 +0000 UTC m=+1089.038009675 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" (UID: "70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:49 crc kubenswrapper[4940]: E0223 09:05:49.655241 4940 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:49 crc kubenswrapper[4940]: E0223 09:05:49.655266 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert podName:82d3766e-53e7-4dc8-9c9b-d71e9d930595 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:57.655259052 +0000 UTC m=+1089.038465209 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert") pod "infra-operator-controller-manager-79d975b745-86vf7" (UID: "82d3766e-53e7-4dc8-9c9b-d71e9d930595") : secret "infra-operator-webhook-server-cert" not found Feb 23 09:05:50 crc kubenswrapper[4940]: I0223 09:05:50.642539 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:50 crc kubenswrapper[4940]: I0223 09:05:50.642644 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:50 crc kubenswrapper[4940]: E0223 09:05:50.644119 4940 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 09:05:50 crc kubenswrapper[4940]: E0223 09:05:50.644169 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:58.644154939 +0000 UTC m=+1090.027361086 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "metrics-server-cert" not found Feb 23 09:05:50 crc kubenswrapper[4940]: E0223 09:05:50.646281 4940 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 09:05:50 crc kubenswrapper[4940]: E0223 09:05:50.646334 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:05:58.646321676 +0000 UTC m=+1090.029527833 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "webhook-server-cert" not found Feb 23 09:05:57 crc kubenswrapper[4940]: I0223 09:05:57.654993 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:05:57 crc kubenswrapper[4940]: E0223 09:05:57.655189 4940 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:57 crc kubenswrapper[4940]: E0223 09:05:57.655680 4940 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert podName:70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e nodeName:}" failed. No retries permitted until 2026-02-23 09:06:13.65565818 +0000 UTC m=+1105.038864347 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" (UID: "70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 09:05:57 crc kubenswrapper[4940]: I0223 09:05:57.756912 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:57 crc kubenswrapper[4940]: I0223 09:05:57.764403 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d3766e-53e7-4dc8-9c9b-d71e9d930595-cert\") pod \"infra-operator-controller-manager-79d975b745-86vf7\" (UID: \"82d3766e-53e7-4dc8-9c9b-d71e9d930595\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:58 crc kubenswrapper[4940]: I0223 09:05:58.030988 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b7dr6" Feb 23 09:05:58 crc kubenswrapper[4940]: I0223 09:05:58.040287 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:05:58 crc kubenswrapper[4940]: I0223 09:05:58.671523 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:58 crc kubenswrapper[4940]: I0223 09:05:58.671646 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:05:58 crc kubenswrapper[4940]: E0223 09:05:58.671778 4940 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 09:05:58 crc kubenswrapper[4940]: E0223 09:05:58.671823 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:06:14.671810124 +0000 UTC m=+1106.055016271 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "metrics-server-cert" not found Feb 23 09:05:58 crc kubenswrapper[4940]: E0223 09:05:58.672001 4940 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 09:05:58 crc kubenswrapper[4940]: E0223 09:05:58.672098 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs podName:ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8 nodeName:}" failed. No retries permitted until 2026-02-23 09:06:14.672060982 +0000 UTC m=+1106.055267149 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs") pod "openstack-operator-controller-manager-554b4c57dc-7gq48" (UID: "ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8") : secret "webhook-server-cert" not found Feb 23 09:06:01 crc kubenswrapper[4940]: E0223 09:06:01.888125 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 23 09:06:01 crc kubenswrapper[4940]: E0223 09:06:01.888768 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwmhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-zb9xm_openstack-operators(bda50d0f-3559-47b6-9ee2-8104750b30c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:06:01 crc kubenswrapper[4940]: E0223 09:06:01.889986 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" podUID="bda50d0f-3559-47b6-9ee2-8104750b30c4" Feb 23 09:06:02 crc kubenswrapper[4940]: E0223 09:06:02.549575 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 23 09:06:02 crc kubenswrapper[4940]: E0223 09:06:02.549848 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjq4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-rwvf9_openstack-operators(e05a318b-495f-49c1-83cf-056d5ce99c8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:06:02 crc kubenswrapper[4940]: E0223 09:06:02.551067 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" podUID="e05a318b-495f-49c1-83cf-056d5ce99c8c" Feb 23 09:06:02 crc kubenswrapper[4940]: E0223 09:06:02.724421 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" podUID="bda50d0f-3559-47b6-9ee2-8104750b30c4" Feb 23 09:06:02 crc kubenswrapper[4940]: E0223 09:06:02.724442 4940 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" podUID="e05a318b-495f-49c1-83cf-056d5ce99c8c" Feb 23 09:06:04 crc kubenswrapper[4940]: E0223 09:06:04.153471 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 23 09:06:04 crc kubenswrapper[4940]: E0223 09:06:04.153654 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fskp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gk729_openstack-operators(69a079c2-ac60-4b97-ae60-25c8189e6816): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:06:04 crc kubenswrapper[4940]: E0223 09:06:04.154830 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" podUID="69a079c2-ac60-4b97-ae60-25c8189e6816" Feb 23 09:06:04 crc kubenswrapper[4940]: E0223 09:06:04.742566 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" podUID="69a079c2-ac60-4b97-ae60-25c8189e6816" Feb 23 09:06:07 crc 
kubenswrapper[4940]: E0223 09:06:07.842222 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 23 09:06:07 crc kubenswrapper[4940]: E0223 09:06:07.842675 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvkpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-6ztzk_openstack-operators(d2c13199-d708-496b-b69a-43fba1068955): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:06:07 crc kubenswrapper[4940]: E0223 09:06:07.843965 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" podUID="d2c13199-d708-496b-b69a-43fba1068955" Feb 23 09:06:08 crc kubenswrapper[4940]: E0223 09:06:08.767995 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" podUID="d2c13199-d708-496b-b69a-43fba1068955" Feb 23 09:06:11 crc kubenswrapper[4940]: I0223 09:06:11.812109 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-79d975b745-86vf7"] Feb 23 09:06:11 crc kubenswrapper[4940]: W0223 09:06:11.818425 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82d3766e_53e7_4dc8_9c9b_d71e9d930595.slice/crio-bc093a0229276edfc0c40fe11338f622e98dfad40ac5e453bdfdbd0a59cf7450 WatchSource:0}: Error finding container bc093a0229276edfc0c40fe11338f622e98dfad40ac5e453bdfdbd0a59cf7450: Status 404 returned error can't find the container with id bc093a0229276edfc0c40fe11338f622e98dfad40ac5e453bdfdbd0a59cf7450 Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.513781 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" event={"ID":"32fc4d76-59e1-44b3-ace9-e9f14dc4f86a","Type":"ContainerStarted","Data":"f2dddbd03e385de6ebce3e21f5c81e3581f6e08e6f602db15cf24e22dbeddddd"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.515275 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.529253 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" event={"ID":"c6e874c6-520a-40fa-b182-e7a0daab54c7","Type":"ContainerStarted","Data":"ab61633f3792861bbc9e0dea04a7155d35704eb9ab8a63e1fec9e78f7390d5a6"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.529602 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.538524 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l" 
event={"ID":"2a7c5730-7ed4-44b1-832d-109fa4460dc5","Type":"ContainerStarted","Data":"e95d6f3f43f850d8f664efaa56d02438b3dbab17ed7de6dca532d96a116b1399"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.539188 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.540655 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f" event={"ID":"db71f743-426e-4fe8-ab74-17c3f68798fc","Type":"ContainerStarted","Data":"ac768b32b42d72141dbec6a545974a255522edc92ccf94bc3e68b0d9e7f425c2"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.541843 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.542270 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb" event={"ID":"dfc9a681-c309-4803-9be0-6150d615b023","Type":"ContainerStarted","Data":"7242ac9d42398629c1d6e226db66aa3846e45cfd36fd3fcc0a5e119e9749fd9c"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.544373 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.545066 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" event={"ID":"15249b0f-c437-4d93-b97a-c7e078139e07","Type":"ContainerStarted","Data":"e7e97fd0a36e0be20229f8c6e705fb53331c68fa2fd800516a6c49eecebffed6"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.545656 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.553382 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" event={"ID":"34061626-0f45-4bb5-a16f-9059fa45be7f","Type":"ContainerStarted","Data":"4252f56916b663552598319849cbcc9729769720095cd568642010c4802bea2f"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.554488 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.554751 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr" event={"ID":"c8d94d12-5d54-4c60-85d4-de19e4dfde67","Type":"ContainerStarted","Data":"743c8007425ff258889267b893fe101eefd798bcba3bd919916a5e0633b49b5f"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.555332 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.564791 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" podStartSLOduration=6.738534959 podStartE2EDuration="31.564769585s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.565666049 +0000 UTC m=+1077.948872206" lastFinishedPulling="2026-02-23 09:06:11.391900675 +0000 UTC m=+1102.775106832" observedRunningTime="2026-02-23 09:06:12.561528564 +0000 UTC m=+1103.944734721" watchObservedRunningTime="2026-02-23 09:06:12.564769585 +0000 UTC m=+1103.947975742" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.566656 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b" event={"ID":"61343538-79c0-4565-ae70-a397b5fd6b2f","Type":"ContainerStarted","Data":"b0bd2f1e8accb3515ab0ccc422c9bc4a2c17a65612341f5fa49295bde2809e43"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.567453 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.584900 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" event={"ID":"82d3766e-53e7-4dc8-9c9b-d71e9d930595","Type":"ContainerStarted","Data":"bc093a0229276edfc0c40fe11338f622e98dfad40ac5e453bdfdbd0a59cf7450"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.585347 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f" podStartSLOduration=9.58819099 podStartE2EDuration="31.585331783s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:45.069907639 +0000 UTC m=+1076.453113796" lastFinishedPulling="2026-02-23 09:06:07.067048432 +0000 UTC m=+1098.450254589" observedRunningTime="2026-02-23 09:06:12.584321753 +0000 UTC m=+1103.967527910" watchObservedRunningTime="2026-02-23 09:06:12.585331783 +0000 UTC m=+1103.968537940" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.589969 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" event={"ID":"2fb7ee71-a9af-4504-8899-932449157080","Type":"ContainerStarted","Data":"fd148d191aa43251d010d76d07575b77c745a3293542f5a25f4b9034e60b1414"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.590644 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.595988 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.602908 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" event={"ID":"c81581e5-15a7-4b56-9b22-ecfd026749bc","Type":"ContainerStarted","Data":"d8b59e8b03adb9d65e5924b1d6f35007cc577387622504dcfec1136b86e92866"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.603657 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.604894 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" event={"ID":"d2fb7a6a-317d-4180-bcc3-07087b8a48ba","Type":"ContainerStarted","Data":"1fb967326c067983a805096e794bfd9ae1b7f9014d40b911114f3ac5100ac93b"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.605323 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.606397 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" event={"ID":"e810e429-c05d-4451-a863-196e8e071d9b","Type":"ContainerStarted","Data":"70aeaed60b6a9af2888f05f1af848831028bb2707793429f710ce7c746a21816"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.606882 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" Feb 23 09:06:12 crc kubenswrapper[4940]: 
I0223 09:06:12.608062 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" event={"ID":"8d39a603-93c8-4c09-a1d2-97e6c14902fe","Type":"ContainerStarted","Data":"1b2a3a7dabab2709c31533dad453b37170ec6ea67ab807469a6e4d647cabb94c"} Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.608274 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.611453 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.898446 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" podStartSLOduration=7.105633075 podStartE2EDuration="31.89843357s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.59525625 +0000 UTC m=+1077.978462407" lastFinishedPulling="2026-02-23 09:06:11.388056745 +0000 UTC m=+1102.771262902" observedRunningTime="2026-02-23 09:06:12.893974452 +0000 UTC m=+1104.277180609" watchObservedRunningTime="2026-02-23 09:06:12.89843357 +0000 UTC m=+1104.281639727" Feb 23 09:06:12 crc kubenswrapper[4940]: I0223 09:06:12.969446 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb" podStartSLOduration=8.009851167 podStartE2EDuration="31.969425306s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:43.107476623 +0000 UTC m=+1074.490682780" lastFinishedPulling="2026-02-23 09:06:07.067050742 +0000 UTC m=+1098.450256919" observedRunningTime="2026-02-23 09:06:12.966388731 +0000 UTC m=+1104.349594888" watchObservedRunningTime="2026-02-23 
09:06:12.969425306 +0000 UTC m=+1104.352631463" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:12.997666 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l" podStartSLOduration=10.606342832 podStartE2EDuration="31.997646923s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.432072555 +0000 UTC m=+1077.815278712" lastFinishedPulling="2026-02-23 09:06:07.823376646 +0000 UTC m=+1099.206582803" observedRunningTime="2026-02-23 09:06:12.993580466 +0000 UTC m=+1104.376786623" watchObservedRunningTime="2026-02-23 09:06:12.997646923 +0000 UTC m=+1104.380853080" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.016422 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" podStartSLOduration=7.191684828 podStartE2EDuration="32.016401275s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.565453363 +0000 UTC m=+1077.948659510" lastFinishedPulling="2026-02-23 09:06:11.3901698 +0000 UTC m=+1102.773375957" observedRunningTime="2026-02-23 09:06:13.012956569 +0000 UTC m=+1104.396162736" watchObservedRunningTime="2026-02-23 09:06:13.016401275 +0000 UTC m=+1104.399607432" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.027195 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr" podStartSLOduration=10.644829977 podStartE2EDuration="32.02718143s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.441023783 +0000 UTC m=+1077.824229940" lastFinishedPulling="2026-02-23 09:06:07.823375236 +0000 UTC m=+1099.206581393" observedRunningTime="2026-02-23 09:06:13.026542401 +0000 UTC m=+1104.409748558" watchObservedRunningTime="2026-02-23 09:06:13.02718143 
+0000 UTC m=+1104.410387587" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.054411 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" podStartSLOduration=11.532874826 podStartE2EDuration="32.054380355s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.544572924 +0000 UTC m=+1077.927779081" lastFinishedPulling="2026-02-23 09:06:07.066078433 +0000 UTC m=+1098.449284610" observedRunningTime="2026-02-23 09:06:13.052513197 +0000 UTC m=+1104.435719374" watchObservedRunningTime="2026-02-23 09:06:13.054380355 +0000 UTC m=+1104.437586502" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.133988 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b" podStartSLOduration=11.599225257 podStartE2EDuration="32.133967897s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.532750146 +0000 UTC m=+1077.915956303" lastFinishedPulling="2026-02-23 09:06:07.067492776 +0000 UTC m=+1098.450698943" observedRunningTime="2026-02-23 09:06:13.130066167 +0000 UTC m=+1104.513272344" watchObservedRunningTime="2026-02-23 09:06:13.133967897 +0000 UTC m=+1104.517174074" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.174532 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" podStartSLOduration=7.537108647 podStartE2EDuration="32.174517018s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.767800147 +0000 UTC m=+1078.151006304" lastFinishedPulling="2026-02-23 09:06:11.405208518 +0000 UTC m=+1102.788414675" observedRunningTime="2026-02-23 09:06:13.17301105 +0000 UTC m=+1104.556217207" watchObservedRunningTime="2026-02-23 09:06:13.174517018 +0000 UTC 
m=+1104.557723175" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.223280 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" podStartSLOduration=7.401066353 podStartE2EDuration="32.223264031s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.565252288 +0000 UTC m=+1077.948458435" lastFinishedPulling="2026-02-23 09:06:11.387449956 +0000 UTC m=+1102.770656113" observedRunningTime="2026-02-23 09:06:13.217828483 +0000 UTC m=+1104.601034640" watchObservedRunningTime="2026-02-23 09:06:13.223264031 +0000 UTC m=+1104.606470188" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.595089 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" podStartSLOduration=8.00197322 podStartE2EDuration="32.595073103s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.796686535 +0000 UTC m=+1078.179892692" lastFinishedPulling="2026-02-23 09:06:11.389786418 +0000 UTC m=+1102.772992575" observedRunningTime="2026-02-23 09:06:13.300529072 +0000 UTC m=+1104.683735229" watchObservedRunningTime="2026-02-23 09:06:13.595073103 +0000 UTC m=+1104.978279260" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.595338 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6" podStartSLOduration=11.310726397 podStartE2EDuration="32.59533441s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.538770323 +0000 UTC m=+1077.921976470" lastFinishedPulling="2026-02-23 09:06:07.823378326 +0000 UTC m=+1099.206584483" observedRunningTime="2026-02-23 09:06:13.594144824 +0000 UTC m=+1104.977350981" watchObservedRunningTime="2026-02-23 09:06:13.59533441 +0000 UTC m=+1104.978540567" Feb 23 
09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.621712 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4" podStartSLOduration=12.056920886 podStartE2EDuration="32.62169501s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.503408754 +0000 UTC m=+1077.886614911" lastFinishedPulling="2026-02-23 09:06:07.068182878 +0000 UTC m=+1098.451389035" observedRunningTime="2026-02-23 09:06:13.619469991 +0000 UTC m=+1105.002676158" watchObservedRunningTime="2026-02-23 09:06:13.62169501 +0000 UTC m=+1105.004901157" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.625064 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6" event={"ID":"780fe903-e160-47c9-9291-31ee2d139266","Type":"ContainerStarted","Data":"fd8978db5c65bee628653dae99e862c540310f9700e82cd63504a73036aaf104"} Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.631110 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4" event={"ID":"0d68e7dc-1d8e-4edd-a2f9-585043e15a98","Type":"ContainerStarted","Data":"91f3246432eb2067bd96291bc9befbc98cb83d01cb62a75ef5470e2b78231c74"} Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.643346 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" podStartSLOduration=12.11304289 podStartE2EDuration="32.643328261s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.53673103 +0000 UTC m=+1077.919937187" lastFinishedPulling="2026-02-23 09:06:07.067016381 +0000 UTC m=+1098.450222558" observedRunningTime="2026-02-23 09:06:13.640441012 +0000 UTC m=+1105.023647169" watchObservedRunningTime="2026-02-23 09:06:13.643328261 +0000 UTC 
m=+1105.026534418" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.666579 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" podStartSLOduration=7.842615181 podStartE2EDuration="32.666563504s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.565547776 +0000 UTC m=+1077.948753933" lastFinishedPulling="2026-02-23 09:06:11.389496099 +0000 UTC m=+1102.772702256" observedRunningTime="2026-02-23 09:06:13.664394846 +0000 UTC m=+1105.047601003" watchObservedRunningTime="2026-02-23 09:06:13.666563504 +0000 UTC m=+1105.049769661" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.716865 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.730522 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn\" (UID: \"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.764327 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gdcnb" Feb 23 09:06:13 crc kubenswrapper[4940]: I0223 09:06:13.773301 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.789045 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.789362 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.795126 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-webhook-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.802887 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8-metrics-certs\") pod \"openstack-operator-controller-manager-554b4c57dc-7gq48\" (UID: \"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8\") " pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.884037 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn"] Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.920886 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qrhjk" Feb 23 09:06:14 crc kubenswrapper[4940]: I0223 09:06:14.929914 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:15 crc kubenswrapper[4940]: I0223 09:06:15.818421 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" event={"ID":"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e","Type":"ContainerStarted","Data":"763e8d9f78d0d789d5a7cfa09b651aa4a6b8e832ebe5bfcd47613255f24b8b79"} Feb 23 09:06:15 crc kubenswrapper[4940]: I0223 09:06:15.950269 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48"] Feb 23 09:06:15 crc kubenswrapper[4940]: W0223 09:06:15.971433 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce0016e4_e6c7_4ac5_8b5e_bd9edfa9c1b8.slice/crio-4cbc5182a1a3e7ea33c6fa95ff39a1a92d308fe52ce9a5af24caf4cbf5353e64 WatchSource:0}: Error finding container 4cbc5182a1a3e7ea33c6fa95ff39a1a92d308fe52ce9a5af24caf4cbf5353e64: Status 404 returned error can't find the container with id 4cbc5182a1a3e7ea33c6fa95ff39a1a92d308fe52ce9a5af24caf4cbf5353e64 Feb 23 09:06:16 crc kubenswrapper[4940]: I0223 09:06:16.919996 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" event={"ID":"e05a318b-495f-49c1-83cf-056d5ce99c8c","Type":"ContainerStarted","Data":"cfa29927135d55bc34d98b459b452cd6eea450b8e8471148669f6305d2a9a4bf"} Feb 23 09:06:16 crc 
kubenswrapper[4940]: I0223 09:06:16.920960 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" Feb 23 09:06:16 crc kubenswrapper[4940]: I0223 09:06:16.938673 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" event={"ID":"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8","Type":"ContainerStarted","Data":"a3f33aab1930d57cc022d4b2133e2346434dc116b1c983fc06cc1ae4a3d729d5"} Feb 23 09:06:16 crc kubenswrapper[4940]: I0223 09:06:16.941038 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:16 crc kubenswrapper[4940]: I0223 09:06:16.941194 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" event={"ID":"ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8","Type":"ContainerStarted","Data":"4cbc5182a1a3e7ea33c6fa95ff39a1a92d308fe52ce9a5af24caf4cbf5353e64"} Feb 23 09:06:17 crc kubenswrapper[4940]: I0223 09:06:17.042102 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" podStartSLOduration=6.414958019 podStartE2EDuration="36.042085258s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.424996075 +0000 UTC m=+1077.808202232" lastFinishedPulling="2026-02-23 09:06:16.052123314 +0000 UTC m=+1107.435329471" observedRunningTime="2026-02-23 09:06:16.996818752 +0000 UTC m=+1108.380024919" watchObservedRunningTime="2026-02-23 09:06:17.042085258 +0000 UTC m=+1108.425291415" Feb 23 09:06:17 crc kubenswrapper[4940]: I0223 09:06:17.373580 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" 
podStartSLOduration=35.373558527 podStartE2EDuration="35.373558527s" podCreationTimestamp="2026-02-23 09:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:06:17.04759739 +0000 UTC m=+1108.430803547" watchObservedRunningTime="2026-02-23 09:06:17.373558527 +0000 UTC m=+1108.756764684" Feb 23 09:06:21 crc kubenswrapper[4940]: I0223 09:06:21.619917 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vp5zb" Feb 23 09:06:21 crc kubenswrapper[4940]: I0223 09:06:21.634835 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-bqhhr" Feb 23 09:06:21 crc kubenswrapper[4940]: I0223 09:06:21.656064 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-92fk4" Feb 23 09:06:21 crc kubenswrapper[4940]: I0223 09:06:21.686722 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-p857l" Feb 23 09:06:21 crc kubenswrapper[4940]: I0223 09:06:21.798472 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-qzd5f" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.022105 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zqz6k" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.067525 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-pvb4b" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.067664 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rwvf9" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.079112 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-vh4r6" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.125223 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-khtmd" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.140024 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-6nlcd" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.211346 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8wv98" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.248647 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-cmbf8" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.359875 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-58p99" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.375775 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-qzv55" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.460676 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-phggr" Feb 23 09:06:22 crc kubenswrapper[4940]: I0223 09:06:22.487439 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s2vxb" Feb 23 09:06:24 crc kubenswrapper[4940]: I0223 09:06:24.938037 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-554b4c57dc-7gq48" Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.109005 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" event={"ID":"82d3766e-53e7-4dc8-9c9b-d71e9d930595","Type":"ContainerStarted","Data":"b660db0ff203b726fe0ed9102397f30b626a8bca5e45d7a6e0c1c625c37f3897"} Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.110333 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.111351 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" event={"ID":"70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e","Type":"ContainerStarted","Data":"9d14cc247e8c9edb22fddb4ce2b2a58d257508a0e78c55f6110b3d80b3c81789"} Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.111700 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.113719 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" event={"ID":"d2c13199-d708-496b-b69a-43fba1068955","Type":"ContainerStarted","Data":"ee16ac09e1d09be7ea6e88a72179d793a0ac4d45a90e5cd5aeff889d6e61b3fa"} Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.114052 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" Feb 23 
09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.134358 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" podStartSLOduration=31.05862703 podStartE2EDuration="46.134137031s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:06:11.820675298 +0000 UTC m=+1103.203881455" lastFinishedPulling="2026-02-23 09:06:26.896185299 +0000 UTC m=+1118.279391456" observedRunningTime="2026-02-23 09:06:27.125459442 +0000 UTC m=+1118.508665599" watchObservedRunningTime="2026-02-23 09:06:27.134137031 +0000 UTC m=+1118.517343208" Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.193016 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" podStartSLOduration=5.754121061 podStartE2EDuration="46.19299655s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.458683543 +0000 UTC m=+1077.841889700" lastFinishedPulling="2026-02-23 09:06:26.897559032 +0000 UTC m=+1118.280765189" observedRunningTime="2026-02-23 09:06:27.187930062 +0000 UTC m=+1118.571136229" watchObservedRunningTime="2026-02-23 09:06:27.19299655 +0000 UTC m=+1118.576202717" Feb 23 09:06:27 crc kubenswrapper[4940]: I0223 09:06:27.196184 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" podStartSLOduration=34.220939403 podStartE2EDuration="46.196171969s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:06:14.92117561 +0000 UTC m=+1106.304381767" lastFinishedPulling="2026-02-23 09:06:26.896408176 +0000 UTC m=+1118.279614333" observedRunningTime="2026-02-23 09:06:27.16918645 +0000 UTC m=+1118.552392627" watchObservedRunningTime="2026-02-23 09:06:27.196171969 +0000 UTC m=+1118.579378136" Feb 23 09:06:28 crc 
kubenswrapper[4940]: I0223 09:06:28.304922 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" event={"ID":"bda50d0f-3559-47b6-9ee2-8104750b30c4","Type":"ContainerStarted","Data":"b77cbc488f5b0ee43fe0ab1919398cc0e290fc4dee52dc829dd2c8fe6ab62592"} Feb 23 09:06:28 crc kubenswrapper[4940]: I0223 09:06:28.305171 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" Feb 23 09:06:28 crc kubenswrapper[4940]: I0223 09:06:28.307520 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" event={"ID":"69a079c2-ac60-4b97-ae60-25c8189e6816","Type":"ContainerStarted","Data":"18782e77a6afc6ddaad2a9c499f2597f739e32411eb5c3052072ffaa1983bcab"} Feb 23 09:06:28 crc kubenswrapper[4940]: I0223 09:06:28.324060 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" podStartSLOduration=7.190428472 podStartE2EDuration="47.324040987s" podCreationTimestamp="2026-02-23 09:05:41 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.763268886 +0000 UTC m=+1078.146475043" lastFinishedPulling="2026-02-23 09:06:26.896881401 +0000 UTC m=+1118.280087558" observedRunningTime="2026-02-23 09:06:28.318480574 +0000 UTC m=+1119.701686751" watchObservedRunningTime="2026-02-23 09:06:28.324040987 +0000 UTC m=+1119.707247154" Feb 23 09:06:28 crc kubenswrapper[4940]: I0223 09:06:28.353464 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gk729" podStartSLOduration=5.948735924 podStartE2EDuration="46.353443081s" podCreationTimestamp="2026-02-23 09:05:42 +0000 UTC" firstStartedPulling="2026-02-23 09:05:46.493107023 +0000 UTC m=+1077.876313180" lastFinishedPulling="2026-02-23 
09:06:26.89781418 +0000 UTC m=+1118.281020337" observedRunningTime="2026-02-23 09:06:28.347083163 +0000 UTC m=+1119.730289320" watchObservedRunningTime="2026-02-23 09:06:28.353443081 +0000 UTC m=+1119.736649238" Feb 23 09:06:31 crc kubenswrapper[4940]: I0223 09:06:31.430039 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:06:31 crc kubenswrapper[4940]: I0223 09:06:31.430365 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:06:32 crc kubenswrapper[4940]: I0223 09:06:32.121636 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-6ztzk" Feb 23 09:06:32 crc kubenswrapper[4940]: I0223 09:06:32.356064 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-zb9xm" Feb 23 09:06:33 crc kubenswrapper[4940]: I0223 09:06:33.779291 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn" Feb 23 09:06:38 crc kubenswrapper[4940]: I0223 09:06:38.219273 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-86vf7" Feb 23 09:07:01 crc kubenswrapper[4940]: I0223 09:07:01.429568 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:07:01 crc kubenswrapper[4940]: I0223 09:07:01.430504 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.249001 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5mvgt"] Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.251419 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.256178 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.256568 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-h9n5q" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.265866 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5mvgt"] Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.318958 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2gpsf"] Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.320156 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.321697 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.338153 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2gpsf"] Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.366818 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46j5l\" (UniqueName: \"kubernetes.io/projected/20421f0a-8d8e-42b8-b181-2ae112e75172-kube-api-access-46j5l\") pod \"dnsmasq-dns-675f4bcbfc-5mvgt\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.367054 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20421f0a-8d8e-42b8-b181-2ae112e75172-config\") pod \"dnsmasq-dns-675f4bcbfc-5mvgt\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.468989 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46j5l\" (UniqueName: \"kubernetes.io/projected/20421f0a-8d8e-42b8-b181-2ae112e75172-kube-api-access-46j5l\") pod \"dnsmasq-dns-675f4bcbfc-5mvgt\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.469339 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-config\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 
23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.469373 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.469701 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20421f0a-8d8e-42b8-b181-2ae112e75172-config\") pod \"dnsmasq-dns-675f4bcbfc-5mvgt\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.469818 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jc7g\" (UniqueName: \"kubernetes.io/projected/9f0d8522-e151-43e6-9da2-881c191d28bd-kube-api-access-4jc7g\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.470784 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20421f0a-8d8e-42b8-b181-2ae112e75172-config\") pod \"dnsmasq-dns-675f4bcbfc-5mvgt\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.500456 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46j5l\" (UniqueName: \"kubernetes.io/projected/20421f0a-8d8e-42b8-b181-2ae112e75172-kube-api-access-46j5l\") pod \"dnsmasq-dns-675f4bcbfc-5mvgt\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 
09:07:04.568893 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.570743 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jc7g\" (UniqueName: \"kubernetes.io/projected/9f0d8522-e151-43e6-9da2-881c191d28bd-kube-api-access-4jc7g\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.570801 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-config\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.570835 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.571679 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.572692 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-config\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.591067 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jc7g\" (UniqueName: \"kubernetes.io/projected/9f0d8522-e151-43e6-9da2-881c191d28bd-kube-api-access-4jc7g\") pod \"dnsmasq-dns-78dd6ddcc-2gpsf\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:04 crc kubenswrapper[4940]: I0223 09:07:04.634933 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:05 crc kubenswrapper[4940]: I0223 09:07:05.082286 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5mvgt"] Feb 23 09:07:05 crc kubenswrapper[4940]: I0223 09:07:05.139346 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2gpsf"] Feb 23 09:07:05 crc kubenswrapper[4940]: W0223 09:07:05.140355 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f0d8522_e151_43e6_9da2_881c191d28bd.slice/crio-a76358ae6e33f990cfd8faf055f44b4a90cb3e7fb71a05903ac5777928e14cf0 WatchSource:0}: Error finding container a76358ae6e33f990cfd8faf055f44b4a90cb3e7fb71a05903ac5777928e14cf0: Status 404 returned error can't find the container with id a76358ae6e33f990cfd8faf055f44b4a90cb3e7fb71a05903ac5777928e14cf0 Feb 23 09:07:05 crc kubenswrapper[4940]: I0223 09:07:05.745654 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" event={"ID":"9f0d8522-e151-43e6-9da2-881c191d28bd","Type":"ContainerStarted","Data":"a76358ae6e33f990cfd8faf055f44b4a90cb3e7fb71a05903ac5777928e14cf0"} Feb 23 09:07:05 crc kubenswrapper[4940]: I0223 09:07:05.747348 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" 
event={"ID":"20421f0a-8d8e-42b8-b181-2ae112e75172","Type":"ContainerStarted","Data":"bd7217b96b0281c5fc37a71ed12ad3758f4b4345b23ea74b52512310ed745c4f"} Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.031604 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5mvgt"] Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.078729 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vnk88"] Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.093692 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vnk88"] Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.093809 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.262281 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-config\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.262362 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.262389 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ck79\" (UniqueName: \"kubernetes.io/projected/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-kube-api-access-7ck79\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " 
pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.363908 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-config\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.363983 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.364006 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ck79\" (UniqueName: \"kubernetes.io/projected/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-kube-api-access-7ck79\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.365479 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-config\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.366190 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.408476 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ck79\" (UniqueName: \"kubernetes.io/projected/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-kube-api-access-7ck79\") pod \"dnsmasq-dns-666b6646f7-vnk88\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.433915 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.481398 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2gpsf"] Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.506276 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lp4jp"] Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.509720 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.522633 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lp4jp"] Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.671678 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.671746 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt28q\" (UniqueName: \"kubernetes.io/projected/d1db4565-596f-4be8-985b-dd8efdd96172-kube-api-access-kt28q\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" 
Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.671942 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-config\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.775511 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-config\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.776924 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-config\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.777720 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.777791 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt28q\" (UniqueName: \"kubernetes.io/projected/d1db4565-596f-4be8-985b-dd8efdd96172-kube-api-access-kt28q\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.779156 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.802305 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt28q\" (UniqueName: \"kubernetes.io/projected/d1db4565-596f-4be8-985b-dd8efdd96172-kube-api-access-kt28q\") pod \"dnsmasq-dns-57d769cc4f-lp4jp\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:07 crc kubenswrapper[4940]: I0223 09:07:07.899816 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.054073 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vnk88"] Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.250202 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.276473 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.276677 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.280854 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.280913 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g5jj6" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.281092 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.281149 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.281212 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.281280 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.281424 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386640 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/987e4448-8da2-41e3-9dba-777d599609f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386693 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 
23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386725 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/987e4448-8da2-41e3-9dba-777d599609f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386753 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9wgl\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-kube-api-access-m9wgl\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386805 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386843 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386879 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc 
kubenswrapper[4940]: I0223 09:07:08.386905 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386938 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386970 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.386990 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.423335 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lp4jp"] Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489192 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " 
pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489443 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489478 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/987e4448-8da2-41e3-9dba-777d599609f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489500 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489522 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/987e4448-8da2-41e3-9dba-777d599609f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489540 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9wgl\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-kube-api-access-m9wgl\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489598 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489647 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489683 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489704 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.489730 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.490002 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: 
\"987e4448-8da2-41e3-9dba-777d599609f5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.492349 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.493074 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.493925 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.495268 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.496154 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-config-data\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.498339 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/987e4448-8da2-41e3-9dba-777d599609f5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.498965 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/987e4448-8da2-41e3-9dba-777d599609f5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.502196 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.502280 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.509003 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9wgl\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-kube-api-access-m9wgl\") pod \"rabbitmq-server-0\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.526283 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: 
\"987e4448-8da2-41e3-9dba-777d599609f5\") " pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.608328 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.624359 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.626103 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.629206 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.629574 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.630192 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.631770 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.631921 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.631948 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.632170 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n24ms" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.639235 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:07:08 crc 
kubenswrapper[4940]: I0223 09:07:08.692339 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692438 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692468 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692513 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692544 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59bq\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-kube-api-access-x59bq\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692592 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7453ffea-4a1f-426c-ac9c-3377973fdb19-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692643 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7453ffea-4a1f-426c-ac9c-3377973fdb19-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692690 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692731 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.692760 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc 
kubenswrapper[4940]: I0223 09:07:08.692798 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.842941 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7453ffea-4a1f-426c-ac9c-3377973fdb19-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.842971 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7453ffea-4a1f-426c-ac9c-3377973fdb19-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.842998 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843021 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843037 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843060 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843090 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843125 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843142 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843162 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.843179 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59bq\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-kube-api-access-x59bq\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.847449 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7453ffea-4a1f-426c-ac9c-3377973fdb19-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.851857 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.857341 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7453ffea-4a1f-426c-ac9c-3377973fdb19-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.857598 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.857864 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.858242 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.858712 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.859225 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.859766 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc 
kubenswrapper[4940]: I0223 09:07:08.861383 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.862318 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59bq\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-kube-api-access-x59bq\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.875709 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" event={"ID":"f7ad5edd-bc1d-473a-baca-cfe974fb32f1","Type":"ContainerStarted","Data":"1914f7d7a08b7bd89df8469e4526a01e4c1661fb3aa2554f87dbd85ba5fe1acd"} Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.880025 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:08 crc kubenswrapper[4940]: I0223 09:07:08.883837 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" event={"ID":"d1db4565-596f-4be8-985b-dd8efdd96172","Type":"ContainerStarted","Data":"af1a148f150b822a4fa47835aa8ec1f5e65e29cc428f3f406136ed7d3edb7396"} Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:08.955058 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.507027 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.509416 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.515283 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.517788 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.518025 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.518221 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.528021 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ffvk5" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.536809 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650079 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7438a4-1302-46b5-a005-b74758200871-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650125 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x5wc\" (UniqueName: 
\"kubernetes.io/projected/1b7438a4-1302-46b5-a005-b74758200871-kube-api-access-5x5wc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650161 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1b7438a4-1302-46b5-a005-b74758200871-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650260 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-kolla-config\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650354 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b7438a4-1302-46b5-a005-b74758200871-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650400 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650460 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.650488 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-config-data-default\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.842555 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.851873 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-config-data-default\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.851944 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7438a4-1302-46b5-a005-b74758200871-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.851979 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x5wc\" (UniqueName: \"kubernetes.io/projected/1b7438a4-1302-46b5-a005-b74758200871-kube-api-access-5x5wc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.852027 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1b7438a4-1302-46b5-a005-b74758200871-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.852084 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-kolla-config\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.852153 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b7438a4-1302-46b5-a005-b74758200871-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.852212 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.852285 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.854218 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.855187 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-config-data-default\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.866664 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1b7438a4-1302-46b5-a005-b74758200871-kolla-config\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.867299 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1b7438a4-1302-46b5-a005-b74758200871-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.867555 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.876011 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7438a4-1302-46b5-a005-b74758200871-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc 
kubenswrapper[4940]: I0223 09:07:09.887419 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b7438a4-1302-46b5-a005-b74758200871-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.896711 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.898140 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x5wc\" (UniqueName: \"kubernetes.io/projected/1b7438a4-1302-46b5-a005-b74758200871-kube-api-access-5x5wc\") pod \"openstack-galera-0\" (UID: \"1b7438a4-1302-46b5-a005-b74758200871\") " pod="openstack/openstack-galera-0" Feb 23 09:07:09 crc kubenswrapper[4940]: I0223 09:07:09.949863 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:07:09 crc kubenswrapper[4940]: W0223 09:07:09.979948 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod987e4448_8da2_41e3_9dba_777d599609f5.slice/crio-98e288e021d3d09cd8a267c6d906525d120d6a9fff7050756a42252df693a837 WatchSource:0}: Error finding container 98e288e021d3d09cd8a267c6d906525d120d6a9fff7050756a42252df693a837: Status 404 returned error can't find the container with id 98e288e021d3d09cd8a267c6d906525d120d6a9fff7050756a42252df693a837 Feb 23 09:07:10 crc kubenswrapper[4940]: I0223 09:07:10.154345 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 09:07:10 crc kubenswrapper[4940]: I0223 09:07:10.814246 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 09:07:10 crc kubenswrapper[4940]: W0223 09:07:10.870643 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b7438a4_1302_46b5_a005_b74758200871.slice/crio-76e570d7404cc420f7d4fc59583cb35757371cbd875ba32dd034d89f6909ef11 WatchSource:0}: Error finding container 76e570d7404cc420f7d4fc59583cb35757371cbd875ba32dd034d89f6909ef11: Status 404 returned error can't find the container with id 76e570d7404cc420f7d4fc59583cb35757371cbd875ba32dd034d89f6909ef11 Feb 23 09:07:10 crc kubenswrapper[4940]: I0223 09:07:10.939768 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b7438a4-1302-46b5-a005-b74758200871","Type":"ContainerStarted","Data":"76e570d7404cc420f7d4fc59583cb35757371cbd875ba32dd034d89f6909ef11"} Feb 23 09:07:10 crc kubenswrapper[4940]: I0223 09:07:10.940873 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7453ffea-4a1f-426c-ac9c-3377973fdb19","Type":"ContainerStarted","Data":"72648ff7c46df20d8ff12411dce7e3bb33e795f32f329f812bc084bc9863a2af"} Feb 23 09:07:10 crc kubenswrapper[4940]: I0223 09:07:10.949934 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"987e4448-8da2-41e3-9dba-777d599609f5","Type":"ContainerStarted","Data":"98e288e021d3d09cd8a267c6d906525d120d6a9fff7050756a42252df693a837"} Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.084177 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.085440 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.111775 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rds8t" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.111869 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.112305 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.128607 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.156206 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.163009 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.167261 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.167526 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-s2tsg" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.176907 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.177825 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.188864 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e0aedede-6061-46c9-8fd2-88a2e1880c2f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.189595 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bccr\" (UniqueName: \"kubernetes.io/projected/e0aedede-6061-46c9-8fd2-88a2e1880c2f-kube-api-access-8bccr\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.189854 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0aedede-6061-46c9-8fd2-88a2e1880c2f-kolla-config\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.190307 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0aedede-6061-46c9-8fd2-88a2e1880c2f-config-data\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.190457 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aedede-6061-46c9-8fd2-88a2e1880c2f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.203683 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298300 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298645 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298672 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f228632e-c649-4cbf-9a32-5baad303ef28-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298729 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0aedede-6061-46c9-8fd2-88a2e1880c2f-config-data\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298755 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298801 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e0aedede-6061-46c9-8fd2-88a2e1880c2f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298833 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f228632e-c649-4cbf-9a32-5baad303ef28-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298899 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0aedede-6061-46c9-8fd2-88a2e1880c2f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298955 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmqr4\" (UniqueName: \"kubernetes.io/projected/f228632e-c649-4cbf-9a32-5baad303ef28-kube-api-access-nmqr4\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.298998 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bccr\" (UniqueName: \"kubernetes.io/projected/e0aedede-6061-46c9-8fd2-88a2e1880c2f-kube-api-access-8bccr\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.299655 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.299715 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f228632e-c649-4cbf-9a32-5baad303ef28-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.299744 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0aedede-6061-46c9-8fd2-88a2e1880c2f-kolla-config\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.304000 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0aedede-6061-46c9-8fd2-88a2e1880c2f-config-data\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.305525 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0aedede-6061-46c9-8fd2-88a2e1880c2f-kolla-config\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.313306 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0aedede-6061-46c9-8fd2-88a2e1880c2f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.316091 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0aedede-6061-46c9-8fd2-88a2e1880c2f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.334270 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bccr\" (UniqueName: \"kubernetes.io/projected/e0aedede-6061-46c9-8fd2-88a2e1880c2f-kube-api-access-8bccr\") pod \"memcached-0\" (UID: \"e0aedede-6061-46c9-8fd2-88a2e1880c2f\") " pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.400807 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f228632e-c649-4cbf-9a32-5baad303ef28-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.400879 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmqr4\" (UniqueName: \"kubernetes.io/projected/f228632e-c649-4cbf-9a32-5baad303ef28-kube-api-access-nmqr4\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.400937 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.400964 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f228632e-c649-4cbf-9a32-5baad303ef28-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.401015 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.401070 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.401094 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f228632e-c649-4cbf-9a32-5baad303ef28-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.401140 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.402729 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.402915 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.402982 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.403871 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f228632e-c649-4cbf-9a32-5baad303ef28-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.406784 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f228632e-c649-4cbf-9a32-5baad303ef28-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.421766 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f228632e-c649-4cbf-9a32-5baad303ef28-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.430461 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f228632e-c649-4cbf-9a32-5baad303ef28-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.430951 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.436554 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmqr4\" (UniqueName: \"kubernetes.io/projected/f228632e-c649-4cbf-9a32-5baad303ef28-kube-api-access-nmqr4\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.446713 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f228632e-c649-4cbf-9a32-5baad303ef28\") " pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:11 crc kubenswrapper[4940]: I0223 09:07:11.503492 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:12 crc kubenswrapper[4940]: I0223 09:07:12.388662 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 09:07:12 crc kubenswrapper[4940]: I0223 09:07:12.405439 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.069202 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f228632e-c649-4cbf-9a32-5baad303ef28","Type":"ContainerStarted","Data":"78eb85cc4df4852295a5fe8618b73458fcf1c288f612f5933bdc4d0277054d58"} Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.078050 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e0aedede-6061-46c9-8fd2-88a2e1880c2f","Type":"ContainerStarted","Data":"d05dbd3a030094c239b02d86f9649d4f922d651f45451cf88f9060066d8bf0c7"} Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.648102 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.653624 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.656817 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vzj9p" Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.664019 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.761259 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkhtl\" (UniqueName: \"kubernetes.io/projected/e38a100d-49bb-4138-a8c7-3eade8ae78f6-kube-api-access-jkhtl\") pod \"kube-state-metrics-0\" (UID: \"e38a100d-49bb-4138-a8c7-3eade8ae78f6\") " pod="openstack/kube-state-metrics-0" Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.862954 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkhtl\" (UniqueName: \"kubernetes.io/projected/e38a100d-49bb-4138-a8c7-3eade8ae78f6-kube-api-access-jkhtl\") pod \"kube-state-metrics-0\" (UID: \"e38a100d-49bb-4138-a8c7-3eade8ae78f6\") " pod="openstack/kube-state-metrics-0" Feb 23 09:07:13 crc kubenswrapper[4940]: I0223 09:07:13.935723 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkhtl\" (UniqueName: \"kubernetes.io/projected/e38a100d-49bb-4138-a8c7-3eade8ae78f6-kube-api-access-jkhtl\") pod \"kube-state-metrics-0\" (UID: \"e38a100d-49bb-4138-a8c7-3eade8ae78f6\") " pod="openstack/kube-state-metrics-0" Feb 23 09:07:14 crc kubenswrapper[4940]: I0223 09:07:14.011381 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.167797 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-skhdb"] Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.168887 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.173536 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-skhdb"] Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.211448 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.211702 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xlhbb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.211861 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.245762 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-srtp4"] Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.247354 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.255852 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.257244 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.258979 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.259241 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.259634 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hqcrz" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.259763 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.259798 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.269502 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5j5f\" (UniqueName: \"kubernetes.io/projected/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-kube-api-access-m5j5f\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270101 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67fcaa2c-2af4-49db-8193-de6e83317807-scripts\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270138 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-log-ovn\") pod 
\"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270181 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-log\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270205 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-ovn-controller-tls-certs\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270262 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-combined-ca-bundle\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270290 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-lib\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270324 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-run\") pod \"ovn-controller-ovs-srtp4\" (UID: 
\"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270351 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-etc-ovs\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270395 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-run-ovn\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270438 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bmb\" (UniqueName: \"kubernetes.io/projected/67fcaa2c-2af4-49db-8193-de6e83317807-kube-api-access-v7bmb\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270461 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-scripts\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.270479 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-run\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " 
pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.294383 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-srtp4"] Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.310031 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371718 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-log\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371781 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-ovn-controller-tls-certs\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371818 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371858 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20aa8441-57d4-4190-8edb-609af4891496-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371893 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-combined-ca-bundle\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371923 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20aa8441-57d4-4190-8edb-609af4891496-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371947 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-lib\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.371986 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkb2h\" (UniqueName: \"kubernetes.io/projected/20aa8441-57d4-4190-8edb-609af4891496-kube-api-access-mkb2h\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372010 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-run\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372042 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-etc-ovs\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372091 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20aa8441-57d4-4190-8edb-609af4891496-config\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372140 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-run-ovn\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372168 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372206 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bmb\" (UniqueName: \"kubernetes.io/projected/67fcaa2c-2af4-49db-8193-de6e83317807-kube-api-access-v7bmb\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372237 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-scripts\") pod \"ovn-controller-skhdb\" (UID: 
\"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372261 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-run\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372283 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372314 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5j5f\" (UniqueName: \"kubernetes.io/projected/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-kube-api-access-m5j5f\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372340 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67fcaa2c-2af4-49db-8193-de6e83317807-scripts\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.372365 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-log-ovn\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 
09:07:17.372407 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.373060 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-log\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.374036 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-run\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.374048 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-run\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.374357 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-etc-ovs\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.374418 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-run-ovn\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.374557 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-var-log-ovn\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.375129 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/67fcaa2c-2af4-49db-8193-de6e83317807-var-lib\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.380688 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-combined-ca-bundle\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.381242 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67fcaa2c-2af4-49db-8193-de6e83317807-scripts\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.387044 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-scripts\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 
09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.391089 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-ovn-controller-tls-certs\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.391765 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5j5f\" (UniqueName: \"kubernetes.io/projected/f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa-kube-api-access-m5j5f\") pod \"ovn-controller-skhdb\" (UID: \"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa\") " pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.393388 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bmb\" (UniqueName: \"kubernetes.io/projected/67fcaa2c-2af4-49db-8193-de6e83317807-kube-api-access-v7bmb\") pod \"ovn-controller-ovs-srtp4\" (UID: \"67fcaa2c-2af4-49db-8193-de6e83317807\") " pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.473899 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.474915 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.474972 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.475058 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20aa8441-57d4-4190-8edb-609af4891496-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.475916 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20aa8441-57d4-4190-8edb-609af4891496-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.476314 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20aa8441-57d4-4190-8edb-609af4891496-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.476412 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkb2h\" (UniqueName: \"kubernetes.io/projected/20aa8441-57d4-4190-8edb-609af4891496-kube-api-access-mkb2h\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.476423 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20aa8441-57d4-4190-8edb-609af4891496-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.476484 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20aa8441-57d4-4190-8edb-609af4891496-config\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.476544 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.476800 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.477861 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20aa8441-57d4-4190-8edb-609af4891496-config\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.484597 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.485094 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.493831 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkb2h\" (UniqueName: \"kubernetes.io/projected/20aa8441-57d4-4190-8edb-609af4891496-kube-api-access-mkb2h\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.502769 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.509017 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20aa8441-57d4-4190-8edb-609af4891496-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"20aa8441-57d4-4190-8edb-609af4891496\") " pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.553211 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-skhdb" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.577777 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:17 crc kubenswrapper[4940]: I0223 09:07:17.598408 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.949360 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.951173 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.954019 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.954167 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.954345 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.954889 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7q29f" Feb 23 09:07:20 crc kubenswrapper[4940]: I0223 09:07:20.966433 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033089 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033153 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck6jc\" (UniqueName: \"kubernetes.io/projected/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-kube-api-access-ck6jc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " 
pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033181 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-config\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033279 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033382 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033501 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033579 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.033684 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135190 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135338 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135371 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135416 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135461 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135496 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck6jc\" (UniqueName: \"kubernetes.io/projected/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-kube-api-access-ck6jc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135517 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-config\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.135541 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.136539 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.136785 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " 
pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.137160 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-config\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.138453 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.145234 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.146237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.150640 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.161154 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck6jc\" (UniqueName: 
\"kubernetes.io/projected/0e5b3c11-0f21-4277-b49b-15dc23cc9d96-kube-api-access-ck6jc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.180308 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0e5b3c11-0f21-4277-b49b-15dc23cc9d96\") " pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:21 crc kubenswrapper[4940]: I0223 09:07:21.278512 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:28 crc kubenswrapper[4940]: E0223 09:07:28.803945 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 23 09:07:28 crc kubenswrapper[4940]: E0223 09:07:28.804439 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5x5wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(1b7438a4-1302-46b5-a005-b74758200871): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:07:28 crc kubenswrapper[4940]: E0223 09:07:28.805642 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="1b7438a4-1302-46b5-a005-b74758200871" Feb 23 09:07:29 crc kubenswrapper[4940]: E0223 09:07:29.282204 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="1b7438a4-1302-46b5-a005-b74758200871" Feb 23 09:07:31 crc kubenswrapper[4940]: I0223 09:07:31.429669 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:07:31 crc kubenswrapper[4940]: I0223 09:07:31.429747 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:07:31 crc kubenswrapper[4940]: I0223 09:07:31.429807 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:07:31 crc kubenswrapper[4940]: I0223 09:07:31.430731 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c29b2fbbefbec40ee98e54f4b935b91d77c404cc4f7cfd7802c91d0a773948d7"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:07:31 crc kubenswrapper[4940]: I0223 09:07:31.430812 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://c29b2fbbefbec40ee98e54f4b935b91d77c404cc4f7cfd7802c91d0a773948d7" gracePeriod=600 Feb 23 09:07:32 crc kubenswrapper[4940]: I0223 09:07:32.311356 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="c29b2fbbefbec40ee98e54f4b935b91d77c404cc4f7cfd7802c91d0a773948d7" exitCode=0 Feb 23 09:07:32 crc kubenswrapper[4940]: I0223 09:07:32.311417 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"c29b2fbbefbec40ee98e54f4b935b91d77c404cc4f7cfd7802c91d0a773948d7"} Feb 23 09:07:32 crc kubenswrapper[4940]: I0223 09:07:32.311857 4940 scope.go:117] "RemoveContainer" containerID="cb93228543200fd2d6020d08fa2989a091f75ae9c39f73df80e0c09ed858a572" Feb 23 09:07:39 crc kubenswrapper[4940]: E0223 09:07:39.470367 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 23 09:07:39 crc kubenswrapper[4940]: E0223 09:07:39.471140 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n547h68dhbfh648h5bfh6bh546hcfh7ch648h68fhf5h599hbbhb8hc4h95h8bh86h654hch78hbdh5d5hfch65fh54fh658h6bhfdhf5h665q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bccr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(e0aedede-6061-46c9-8fd2-88a2e1880c2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:07:39 crc kubenswrapper[4940]: E0223 09:07:39.472838 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="e0aedede-6061-46c9-8fd2-88a2e1880c2f" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.205132 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.205903 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jc7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2gpsf_openstack(9f0d8522-e151-43e6-9da2-881c191d28bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.207043 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" podUID="9f0d8522-e151-43e6-9da2-881c191d28bd" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.214976 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.215113 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ck79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-vnk88_openstack(f7ad5edd-bc1d-473a-baca-cfe974fb32f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.216375 4940 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" podUID="f7ad5edd-bc1d-473a-baca-cfe974fb32f1" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.235747 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.235909 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46j5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-5mvgt_openstack(20421f0a-8d8e-42b8-b181-2ae112e75172): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.238455 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" podUID="20421f0a-8d8e-42b8-b181-2ae112e75172" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.240348 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.240441 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt28q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-lp4jp_openstack(d1db4565-596f-4be8-985b-dd8efdd96172): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.241532 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" podUID="d1db4565-596f-4be8-985b-dd8efdd96172" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.413043 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" podUID="d1db4565-596f-4be8-985b-dd8efdd96172" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.413668 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="e0aedede-6061-46c9-8fd2-88a2e1880c2f" Feb 23 09:07:40 crc kubenswrapper[4940]: E0223 09:07:40.413919 4940 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" podUID="f7ad5edd-bc1d-473a-baca-cfe974fb32f1" Feb 23 09:07:40 crc kubenswrapper[4940]: I0223 09:07:40.968038 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-skhdb"] Feb 23 09:07:41 crc kubenswrapper[4940]: W0223 09:07:41.041794 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4dfcca1_21ca_42ff_bed0_1bb4f8d14aaa.slice/crio-4f7678356c8cd213acafdbe142dd551b04fd3fa941cf62ad406c16417728fc2b WatchSource:0}: Error finding container 4f7678356c8cd213acafdbe142dd551b04fd3fa941cf62ad406c16417728fc2b: Status 404 returned error can't find the container with id 4f7678356c8cd213acafdbe142dd551b04fd3fa941cf62ad406c16417728fc2b Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.131296 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.181792 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 09:07:41 crc kubenswrapper[4940]: W0223 09:07:41.232318 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode38a100d_49bb_4138_a8c7_3eade8ae78f6.slice/crio-3fe4742bf6fb38311ecd6e100b5fe9025a408d4d741188abd326e7be1b0b9a87 WatchSource:0}: Error finding container 3fe4742bf6fb38311ecd6e100b5fe9025a408d4d741188abd326e7be1b0b9a87: Status 404 returned error can't find the container with id 3fe4742bf6fb38311ecd6e100b5fe9025a408d4d741188abd326e7be1b0b9a87 Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.272762 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-srtp4"] Feb 
23 09:07:41 crc kubenswrapper[4940]: W0223 09:07:41.281283 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67fcaa2c_2af4_49db_8193_de6e83317807.slice/crio-19a2def52bc1872d048179a51c9cb9efdc4dafdd6391d8ba2c557583185ee519 WatchSource:0}: Error finding container 19a2def52bc1872d048179a51c9cb9efdc4dafdd6391d8ba2c557583185ee519: Status 404 returned error can't find the container with id 19a2def52bc1872d048179a51c9cb9efdc4dafdd6391d8ba2c557583185ee519 Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.325855 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.331886 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.423228 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.434684 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461527 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-srtp4" event={"ID":"67fcaa2c-2af4-49db-8193-de6e83317807","Type":"ContainerStarted","Data":"19a2def52bc1872d048179a51c9cb9efdc4dafdd6391d8ba2c557583185ee519"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461585 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e38a100d-49bb-4138-a8c7-3eade8ae78f6","Type":"ContainerStarted","Data":"3fe4742bf6fb38311ecd6e100b5fe9025a408d4d741188abd326e7be1b0b9a87"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461638 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-5mvgt" event={"ID":"20421f0a-8d8e-42b8-b181-2ae112e75172","Type":"ContainerDied","Data":"bd7217b96b0281c5fc37a71ed12ad3758f4b4345b23ea74b52512310ed745c4f"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461661 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461683 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"4fad30523a6437e41dff0a057c489e191d38b824c4f57fb0c206fe34b2b2c2ec"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461705 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"20aa8441-57d4-4190-8edb-609af4891496","Type":"ContainerStarted","Data":"155d3a3012e57c6e4a22c28b04e6e89218d448132d4a911331282b62e1c42e41"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461720 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"f228632e-c649-4cbf-9a32-5baad303ef28","Type":"ContainerStarted","Data":"27483da9cb526ca1e577b509bc317c2696a94a67fb234f229751e10e9f22130e"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461735 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"987e4448-8da2-41e3-9dba-777d599609f5","Type":"ContainerStarted","Data":"23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461754 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2gpsf" event={"ID":"9f0d8522-e151-43e6-9da2-881c191d28bd","Type":"ContainerDied","Data":"a76358ae6e33f990cfd8faf055f44b4a90cb3e7fb71a05903ac5777928e14cf0"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.461772 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb" event={"ID":"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa","Type":"ContainerStarted","Data":"4f7678356c8cd213acafdbe142dd551b04fd3fa941cf62ad406c16417728fc2b"} Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.514740 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jc7g\" (UniqueName: \"kubernetes.io/projected/9f0d8522-e151-43e6-9da2-881c191d28bd-kube-api-access-4jc7g\") pod \"9f0d8522-e151-43e6-9da2-881c191d28bd\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.514803 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46j5l\" (UniqueName: \"kubernetes.io/projected/20421f0a-8d8e-42b8-b181-2ae112e75172-kube-api-access-46j5l\") pod \"20421f0a-8d8e-42b8-b181-2ae112e75172\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.514831 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-config\") pod \"9f0d8522-e151-43e6-9da2-881c191d28bd\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.514966 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-dns-svc\") pod \"9f0d8522-e151-43e6-9da2-881c191d28bd\" (UID: \"9f0d8522-e151-43e6-9da2-881c191d28bd\") " Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.515005 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20421f0a-8d8e-42b8-b181-2ae112e75172-config\") pod \"20421f0a-8d8e-42b8-b181-2ae112e75172\" (UID: \"20421f0a-8d8e-42b8-b181-2ae112e75172\") " Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.515773 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20421f0a-8d8e-42b8-b181-2ae112e75172-config" (OuterVolumeSpecName: "config") pod "20421f0a-8d8e-42b8-b181-2ae112e75172" (UID: "20421f0a-8d8e-42b8-b181-2ae112e75172"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.516898 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-config" (OuterVolumeSpecName: "config") pod "9f0d8522-e151-43e6-9da2-881c191d28bd" (UID: "9f0d8522-e151-43e6-9da2-881c191d28bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.520871 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0d8522-e151-43e6-9da2-881c191d28bd-kube-api-access-4jc7g" (OuterVolumeSpecName: "kube-api-access-4jc7g") pod "9f0d8522-e151-43e6-9da2-881c191d28bd" (UID: "9f0d8522-e151-43e6-9da2-881c191d28bd"). InnerVolumeSpecName "kube-api-access-4jc7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.521274 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f0d8522-e151-43e6-9da2-881c191d28bd" (UID: "9f0d8522-e151-43e6-9da2-881c191d28bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.521704 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20421f0a-8d8e-42b8-b181-2ae112e75172-kube-api-access-46j5l" (OuterVolumeSpecName: "kube-api-access-46j5l") pod "20421f0a-8d8e-42b8-b181-2ae112e75172" (UID: "20421f0a-8d8e-42b8-b181-2ae112e75172"). InnerVolumeSpecName "kube-api-access-46j5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.620942 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.620976 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20421f0a-8d8e-42b8-b181-2ae112e75172-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.620989 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jc7g\" (UniqueName: \"kubernetes.io/projected/9f0d8522-e151-43e6-9da2-881c191d28bd-kube-api-access-4jc7g\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.621002 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46j5l\" (UniqueName: \"kubernetes.io/projected/20421f0a-8d8e-42b8-b181-2ae112e75172-kube-api-access-46j5l\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.621013 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d8522-e151-43e6-9da2-881c191d28bd-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.781807 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5mvgt"] Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.795503 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5mvgt"] Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.816007 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2gpsf"] Feb 23 09:07:41 crc kubenswrapper[4940]: I0223 09:07:41.821300 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-2gpsf"] Feb 23 09:07:42 crc kubenswrapper[4940]: I0223 09:07:42.446210 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0e5b3c11-0f21-4277-b49b-15dc23cc9d96","Type":"ContainerStarted","Data":"ae68cb2dafa70d91a7cf63474aa9c03cca74c8c327681cd04f2f052bf6221c4d"} Feb 23 09:07:42 crc kubenswrapper[4940]: I0223 09:07:42.448703 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7453ffea-4a1f-426c-ac9c-3377973fdb19","Type":"ContainerStarted","Data":"bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e"} Feb 23 09:07:43 crc kubenswrapper[4940]: I0223 09:07:43.357600 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20421f0a-8d8e-42b8-b181-2ae112e75172" path="/var/lib/kubelet/pods/20421f0a-8d8e-42b8-b181-2ae112e75172/volumes" Feb 23 09:07:43 crc kubenswrapper[4940]: I0223 09:07:43.359659 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0d8522-e151-43e6-9da2-881c191d28bd" path="/var/lib/kubelet/pods/9f0d8522-e151-43e6-9da2-881c191d28bd/volumes" Feb 23 09:07:45 crc kubenswrapper[4940]: I0223 09:07:45.479428 4940 generic.go:334] "Generic (PLEG): container finished" podID="f228632e-c649-4cbf-9a32-5baad303ef28" containerID="27483da9cb526ca1e577b509bc317c2696a94a67fb234f229751e10e9f22130e" exitCode=0 Feb 23 09:07:45 crc kubenswrapper[4940]: I0223 09:07:45.479908 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f228632e-c649-4cbf-9a32-5baad303ef28","Type":"ContainerDied","Data":"27483da9cb526ca1e577b509bc317c2696a94a67fb234f229751e10e9f22130e"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.488145 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"1b7438a4-1302-46b5-a005-b74758200871","Type":"ContainerStarted","Data":"e035aac7900f26ccdbae86ceff73d9770dc41950d884ccf2cf4a31eefd774f6f"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.489680 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0e5b3c11-0f21-4277-b49b-15dc23cc9d96","Type":"ContainerStarted","Data":"bcc0a472d8bf1db9c73767a711f3aab76f59c832a83623c2ec6546e4f95ccf07"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.491747 4940 generic.go:334] "Generic (PLEG): container finished" podID="67fcaa2c-2af4-49db-8193-de6e83317807" containerID="27c6bd06535fe8e4c74d85da2ae58c6ee6127e133508c3b4265e151eeda72d9d" exitCode=0 Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.491794 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-srtp4" event={"ID":"67fcaa2c-2af4-49db-8193-de6e83317807","Type":"ContainerDied","Data":"27c6bd06535fe8e4c74d85da2ae58c6ee6127e133508c3b4265e151eeda72d9d"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.493214 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb" event={"ID":"f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa","Type":"ContainerStarted","Data":"f6affbfa9142985d1aead9efd8bb662874500c76070d802d03d3318e84acd712"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.493369 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-skhdb" Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.495059 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e38a100d-49bb-4138-a8c7-3eade8ae78f6","Type":"ContainerStarted","Data":"dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.495507 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 09:07:46 crc 
kubenswrapper[4940]: I0223 09:07:46.496765 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"20aa8441-57d4-4190-8edb-609af4891496","Type":"ContainerStarted","Data":"0727ad4498252982d0ab9727fa2211c7f3abddb4b58b89f244c719d17d7ab852"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.498689 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f228632e-c649-4cbf-9a32-5baad303ef28","Type":"ContainerStarted","Data":"83e18390190e9b8272205290ef7bbb0231f69effc37d9072b82d2d758a12fcbb"} Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.551117 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=29.387790407 podStartE2EDuration="33.551093546s" podCreationTimestamp="2026-02-23 09:07:13 +0000 UTC" firstStartedPulling="2026-02-23 09:07:41.234594171 +0000 UTC m=+1192.617800328" lastFinishedPulling="2026-02-23 09:07:45.3978973 +0000 UTC m=+1196.781103467" observedRunningTime="2026-02-23 09:07:46.546516934 +0000 UTC m=+1197.929723091" watchObservedRunningTime="2026-02-23 09:07:46.551093546 +0000 UTC m=+1197.934299703" Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.573788 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-skhdb" podStartSLOduration=25.333669404 podStartE2EDuration="29.57376479s" podCreationTimestamp="2026-02-23 09:07:17 +0000 UTC" firstStartedPulling="2026-02-23 09:07:41.045907598 +0000 UTC m=+1192.429113755" lastFinishedPulling="2026-02-23 09:07:45.286002984 +0000 UTC m=+1196.669209141" observedRunningTime="2026-02-23 09:07:46.567265618 +0000 UTC m=+1197.950471775" watchObservedRunningTime="2026-02-23 09:07:46.57376479 +0000 UTC m=+1197.956970947" Feb 23 09:07:46 crc kubenswrapper[4940]: I0223 09:07:46.587622 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.930408918 podStartE2EDuration="36.58759511s" podCreationTimestamp="2026-02-23 09:07:10 +0000 UTC" firstStartedPulling="2026-02-23 09:07:12.494702673 +0000 UTC m=+1163.877908830" lastFinishedPulling="2026-02-23 09:07:40.151888865 +0000 UTC m=+1191.535095022" observedRunningTime="2026-02-23 09:07:46.584268756 +0000 UTC m=+1197.967474923" watchObservedRunningTime="2026-02-23 09:07:46.58759511 +0000 UTC m=+1197.970801267" Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.507144 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0e5b3c11-0f21-4277-b49b-15dc23cc9d96","Type":"ContainerStarted","Data":"8eeb057fc4844644d8462b025640062da690ffb40c6605b093747456374649d8"} Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.511008 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-srtp4" event={"ID":"67fcaa2c-2af4-49db-8193-de6e83317807","Type":"ContainerStarted","Data":"9c1ba57c8bec4e3e0cfaa9828eac8137b7c0cb483cc9d7f4981c4600a52c665c"} Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.512984 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"20aa8441-57d4-4190-8edb-609af4891496","Type":"ContainerStarted","Data":"8edd85b865c8f3a1b90d2d74a5b0d22c658b804833f5cdccb4f92e9fd5f840b0"} Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.525903 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=22.879482335 podStartE2EDuration="28.525883069s" podCreationTimestamp="2026-02-23 09:07:19 +0000 UTC" firstStartedPulling="2026-02-23 09:07:41.478783277 +0000 UTC m=+1192.861989434" lastFinishedPulling="2026-02-23 09:07:47.125184001 +0000 UTC m=+1198.508390168" observedRunningTime="2026-02-23 09:07:47.524341361 +0000 UTC m=+1198.907547538" watchObservedRunningTime="2026-02-23 09:07:47.525883069 +0000 UTC 
m=+1198.909089226" Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.547479 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.659521201 podStartE2EDuration="31.547461049s" podCreationTimestamp="2026-02-23 09:07:16 +0000 UTC" firstStartedPulling="2026-02-23 09:07:41.247750109 +0000 UTC m=+1192.630956266" lastFinishedPulling="2026-02-23 09:07:47.135689957 +0000 UTC m=+1198.518896114" observedRunningTime="2026-02-23 09:07:47.541969969 +0000 UTC m=+1198.925176146" watchObservedRunningTime="2026-02-23 09:07:47.547461049 +0000 UTC m=+1198.930667206" Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.599190 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:47 crc kubenswrapper[4940]: I0223 09:07:47.599487 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:48 crc kubenswrapper[4940]: I0223 09:07:48.279314 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:48 crc kubenswrapper[4940]: I0223 09:07:48.336428 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:48 crc kubenswrapper[4940]: I0223 09:07:48.522543 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-srtp4" event={"ID":"67fcaa2c-2af4-49db-8193-de6e83317807","Type":"ContainerStarted","Data":"8c7c72dc29c428706cb923d15b410261f96cecc2658512c556000f670803c057"} Feb 23 09:07:48 crc kubenswrapper[4940]: I0223 09:07:48.523091 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:48 crc kubenswrapper[4940]: I0223 09:07:48.550478 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-srtp4" podStartSLOduration=27.831470282 
podStartE2EDuration="31.550456449s" podCreationTimestamp="2026-02-23 09:07:17 +0000 UTC" firstStartedPulling="2026-02-23 09:07:41.284006625 +0000 UTC m=+1192.667212782" lastFinishedPulling="2026-02-23 09:07:45.002992762 +0000 UTC m=+1196.386198949" observedRunningTime="2026-02-23 09:07:48.54694734 +0000 UTC m=+1199.930153517" watchObservedRunningTime="2026-02-23 09:07:48.550456449 +0000 UTC m=+1199.933662626" Feb 23 09:07:49 crc kubenswrapper[4940]: I0223 09:07:49.529831 4940 generic.go:334] "Generic (PLEG): container finished" podID="1b7438a4-1302-46b5-a005-b74758200871" containerID="e035aac7900f26ccdbae86ceff73d9770dc41950d884ccf2cf4a31eefd774f6f" exitCode=0 Feb 23 09:07:49 crc kubenswrapper[4940]: I0223 09:07:49.529919 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b7438a4-1302-46b5-a005-b74758200871","Type":"ContainerDied","Data":"e035aac7900f26ccdbae86ceff73d9770dc41950d884ccf2cf4a31eefd774f6f"} Feb 23 09:07:49 crc kubenswrapper[4940]: I0223 09:07:49.530303 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:49 crc kubenswrapper[4940]: I0223 09:07:49.530328 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.363913 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6fbfbdcfc7-6tv8l" podUID="19abcf46-c53b-4409-a6f9-e7e8b41e3182" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.634364 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.677184 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.931474 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vnk88"] Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.974953 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-7gv4v"] Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.976681 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.986893 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 23 09:07:50 crc kubenswrapper[4940]: I0223 09:07:50.987107 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-7gv4v"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.002958 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.003026 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-config\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.003043 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rp8s\" (UniqueName: \"kubernetes.io/projected/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-kube-api-access-5rp8s\") pod 
\"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.003224 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.057752 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jl7wx"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.058807 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.061299 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.065586 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jl7wx"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105473 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105569 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " 
pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105596 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-config\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105728 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-combined-ca-bundle\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105785 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-ovs-rundir\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105821 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-ovn-rundir\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105840 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105869 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhm6\" (UniqueName: \"kubernetes.io/projected/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-kube-api-access-sdhm6\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105896 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-config\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.105914 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rp8s\" (UniqueName: \"kubernetes.io/projected/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-kube-api-access-5rp8s\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.111998 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.112005 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 
09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.112444 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-config\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.133109 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rp8s\" (UniqueName: \"kubernetes.io/projected/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-kube-api-access-5rp8s\") pod \"dnsmasq-dns-7fd796d7df-7gv4v\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") " pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.207332 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-combined-ca-bundle\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.207383 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-ovs-rundir\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.207404 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-ovn-rundir\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.207437 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhm6\" (UniqueName: \"kubernetes.io/projected/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-kube-api-access-sdhm6\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.207537 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.207566 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-config\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.214846 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-ovs-rundir\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.214992 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-ovn-rundir\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.217071 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-config\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.231205 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-combined-ca-bundle\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.242088 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lp4jp"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.268280 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhm6\" (UniqueName: \"kubernetes.io/projected/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-kube-api-access-sdhm6\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.269348 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a2f9a0-c632-432a-aebd-7f3c5863d0bc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jl7wx\" (UID: \"53a2f9a0-c632-432a-aebd-7f3c5863d0bc\") " pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.278011 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5lc7s"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.300949 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.302027 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5lc7s"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.302737 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.324855 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.338771 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.343975 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.380671 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jl7wx" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.414405 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.414670 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cxml\" (UniqueName: \"kubernetes.io/projected/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-kube-api-access-2cxml\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.414717 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-config\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.414859 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.414882 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" 
(UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.505624 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.506996 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.517659 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ck79\" (UniqueName: \"kubernetes.io/projected/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-kube-api-access-7ck79\") pod \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.517699 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-config\") pod \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.517737 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-dns-svc\") pod \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\" (UID: \"f7ad5edd-bc1d-473a-baca-cfe974fb32f1\") " Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.518040 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.518063 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2cxml\" (UniqueName: \"kubernetes.io/projected/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-kube-api-access-2cxml\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.518081 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-config\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.518172 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.518187 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.518266 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-config" (OuterVolumeSpecName: "config") pod "f7ad5edd-bc1d-473a-baca-cfe974fb32f1" (UID: "f7ad5edd-bc1d-473a-baca-cfe974fb32f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.519261 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7ad5edd-bc1d-473a-baca-cfe974fb32f1" (UID: "f7ad5edd-bc1d-473a-baca-cfe974fb32f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.520187 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.521031 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.521469 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-config\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.521576 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc 
kubenswrapper[4940]: I0223 09:07:51.524519 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-kube-api-access-7ck79" (OuterVolumeSpecName: "kube-api-access-7ck79") pod "f7ad5edd-bc1d-473a-baca-cfe974fb32f1" (UID: "f7ad5edd-bc1d-473a-baca-cfe974fb32f1"). InnerVolumeSpecName "kube-api-access-7ck79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.543518 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cxml\" (UniqueName: \"kubernetes.io/projected/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-kube-api-access-2cxml\") pod \"dnsmasq-dns-86db49b7ff-5lc7s\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.552581 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1b7438a4-1302-46b5-a005-b74758200871","Type":"ContainerStarted","Data":"2f908f0c95eb2c41ccd7e89c7051d9e997dc4b779c3a4447987a6fb5c28693f6"} Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.554659 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.557740 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vnk88" event={"ID":"f7ad5edd-bc1d-473a-baca-cfe974fb32f1","Type":"ContainerDied","Data":"1914f7d7a08b7bd89df8469e4526a01e4c1661fb3aa2554f87dbd85ba5fe1acd"} Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.598493 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371993.256308 podStartE2EDuration="43.59846907s" podCreationTimestamp="2026-02-23 09:07:08 +0000 UTC" firstStartedPulling="2026-02-23 09:07:10.874518759 +0000 UTC m=+1162.257724916" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:07:51.59074037 +0000 UTC m=+1202.973946537" watchObservedRunningTime="2026-02-23 09:07:51.59846907 +0000 UTC m=+1202.981675237" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.610267 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.620128 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.620159 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ck79\" (UniqueName: \"kubernetes.io/projected/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-kube-api-access-7ck79\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.620174 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ad5edd-bc1d-473a-baca-cfe974fb32f1-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.623174 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.624382 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.628201 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.628374 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.628769 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5kmdk" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.628785 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.648953 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.651848 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vnk88"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.663736 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vnk88"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.720671 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-dns-svc\") pod \"d1db4565-596f-4be8-985b-dd8efdd96172\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.720858 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt28q\" (UniqueName: \"kubernetes.io/projected/d1db4565-596f-4be8-985b-dd8efdd96172-kube-api-access-kt28q\") pod \"d1db4565-596f-4be8-985b-dd8efdd96172\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.720906 4940 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-config\") pod \"d1db4565-596f-4be8-985b-dd8efdd96172\" (UID: \"d1db4565-596f-4be8-985b-dd8efdd96172\") " Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721103 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1db4565-596f-4be8-985b-dd8efdd96172" (UID: "d1db4565-596f-4be8-985b-dd8efdd96172"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721108 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-config\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721223 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721263 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721414 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-scripts\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721448 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-config" (OuterVolumeSpecName: "config") pod "d1db4565-596f-4be8-985b-dd8efdd96172" (UID: "d1db4565-596f-4be8-985b-dd8efdd96172"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721537 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkr5\" (UniqueName: \"kubernetes.io/projected/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-kube-api-access-mzkr5\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721595 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721675 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721748 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-config\") on node \"crc\" 
DevicePath \"\"" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.721765 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1db4565-596f-4be8-985b-dd8efdd96172-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.723918 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1db4565-596f-4be8-985b-dd8efdd96172-kube-api-access-kt28q" (OuterVolumeSpecName: "kube-api-access-kt28q") pod "d1db4565-596f-4be8-985b-dd8efdd96172" (UID: "d1db4565-596f-4be8-985b-dd8efdd96172"). InnerVolumeSpecName "kube-api-access-kt28q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.742240 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.822710 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-config\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.822805 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.822846 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" 
Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.822897 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-scripts\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.822965 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkr5\" (UniqueName: \"kubernetes.io/projected/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-kube-api-access-mzkr5\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.822997 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.823024 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.823090 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt28q\" (UniqueName: \"kubernetes.io/projected/d1db4565-596f-4be8-985b-dd8efdd96172-kube-api-access-kt28q\") on node \"crc\" DevicePath \"\"" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.823462 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-config\") pod \"ovn-northd-0\" (UID: 
\"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.823902 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-scripts\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.824174 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.828413 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.829018 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.834476 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.852660 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkr5\" (UniqueName: 
\"kubernetes.io/projected/a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a-kube-api-access-mzkr5\") pod \"ovn-northd-0\" (UID: \"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a\") " pod="openstack/ovn-northd-0" Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.869989 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-7gv4v"] Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.944238 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jl7wx"] Feb 23 09:07:51 crc kubenswrapper[4940]: W0223 09:07:51.945916 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a2f9a0_c632_432a_aebd_7f3c5863d0bc.slice/crio-7fe5931502836b675382587803a049ac7d91d79d5a1432a5d26988ab3a5d43b9 WatchSource:0}: Error finding container 7fe5931502836b675382587803a049ac7d91d79d5a1432a5d26988ab3a5d43b9: Status 404 returned error can't find the container with id 7fe5931502836b675382587803a049ac7d91d79d5a1432a5d26988ab3a5d43b9 Feb 23 09:07:51 crc kubenswrapper[4940]: I0223 09:07:51.950884 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.005709 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5lc7s"] Feb 23 09:07:52 crc kubenswrapper[4940]: W0223 09:07:52.011096 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ee12f4d_4ae5_496e_b7dc_71b4e3b80300.slice/crio-d3f14565721488b5f88e1be4d5eb3ea13b666450778e2f7a4067c47614bc715c WatchSource:0}: Error finding container d3f14565721488b5f88e1be4d5eb3ea13b666450778e2f7a4067c47614bc715c: Status 404 returned error can't find the container with id d3f14565721488b5f88e1be4d5eb3ea13b666450778e2f7a4067c47614bc715c Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.395873 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.573593 4940 generic.go:334] "Generic (PLEG): container finished" podID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerID="aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab" exitCode=0 Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.573950 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" event={"ID":"0e4471c0-8a8f-4f32-8e0f-678066e4afc1","Type":"ContainerDied","Data":"aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.574030 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" event={"ID":"0e4471c0-8a8f-4f32-8e0f-678066e4afc1","Type":"ContainerStarted","Data":"61c15a27def7a2d9053c18b9fa9be120e84da5cfba14ec8df05886f485e7941c"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.577492 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" 
event={"ID":"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300","Type":"ContainerStarted","Data":"d3f14565721488b5f88e1be4d5eb3ea13b666450778e2f7a4067c47614bc715c"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.580633 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" event={"ID":"d1db4565-596f-4be8-985b-dd8efdd96172","Type":"ContainerDied","Data":"af1a148f150b822a4fa47835aa8ec1f5e65e29cc428f3f406136ed7d3edb7396"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.580657 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-lp4jp" Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.582370 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a","Type":"ContainerStarted","Data":"b13b1997563b79e53bf9f05f315b6ac13f563939413f685a81fc7d7c19c61fa1"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.586772 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jl7wx" event={"ID":"53a2f9a0-c632-432a-aebd-7f3c5863d0bc","Type":"ContainerStarted","Data":"bc7ce2e224d23ec99360d1d145d62b008f7693a9340077207d85b1ea6924a120"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.586822 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jl7wx" event={"ID":"53a2f9a0-c632-432a-aebd-7f3c5863d0bc","Type":"ContainerStarted","Data":"7fe5931502836b675382587803a049ac7d91d79d5a1432a5d26988ab3a5d43b9"} Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.618490 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jl7wx" podStartSLOduration=1.618469468 podStartE2EDuration="1.618469468s" podCreationTimestamp="2026-02-23 09:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 09:07:52.613962907 +0000 UTC m=+1203.997169074" watchObservedRunningTime="2026-02-23 09:07:52.618469468 +0000 UTC m=+1204.001675625" Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.702680 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lp4jp"] Feb 23 09:07:52 crc kubenswrapper[4940]: I0223 09:07:52.710872 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-lp4jp"] Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.363326 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1db4565-596f-4be8-985b-dd8efdd96172" path="/var/lib/kubelet/pods/d1db4565-596f-4be8-985b-dd8efdd96172/volumes" Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.364050 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7ad5edd-bc1d-473a-baca-cfe974fb32f1" path="/var/lib/kubelet/pods/f7ad5edd-bc1d-473a-baca-cfe974fb32f1/volumes" Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.599105 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" event={"ID":"0e4471c0-8a8f-4f32-8e0f-678066e4afc1","Type":"ContainerStarted","Data":"d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb"} Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.599166 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.603326 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e0aedede-6061-46c9-8fd2-88a2e1880c2f","Type":"ContainerStarted","Data":"9eba580887b894e74b1638e138889641c2f879a517194a8e61c4c3493d9595f1"} Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.603869 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 
09:07:53.605774 4940 generic.go:334] "Generic (PLEG): container finished" podID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerID="0269ecd30a3ef9ef55024688668afe4bf9fdf73c5ca4d29f60884609f4eb964c" exitCode=0 Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.605847 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" event={"ID":"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300","Type":"ContainerDied","Data":"0269ecd30a3ef9ef55024688668afe4bf9fdf73c5ca4d29f60884609f4eb964c"} Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.623276 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" podStartSLOduration=3.221190002 podStartE2EDuration="3.623256733s" podCreationTimestamp="2026-02-23 09:07:50 +0000 UTC" firstStartedPulling="2026-02-23 09:07:51.879317755 +0000 UTC m=+1203.262523922" lastFinishedPulling="2026-02-23 09:07:52.281384496 +0000 UTC m=+1203.664590653" observedRunningTime="2026-02-23 09:07:53.617666219 +0000 UTC m=+1205.000872426" watchObservedRunningTime="2026-02-23 09:07:53.623256733 +0000 UTC m=+1205.006462890" Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.653041 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.312501568 podStartE2EDuration="42.653018587s" podCreationTimestamp="2026-02-23 09:07:11 +0000 UTC" firstStartedPulling="2026-02-23 09:07:12.493516456 +0000 UTC m=+1163.876722613" lastFinishedPulling="2026-02-23 09:07:52.834033465 +0000 UTC m=+1204.217239632" observedRunningTime="2026-02-23 09:07:53.633396278 +0000 UTC m=+1205.016602445" watchObservedRunningTime="2026-02-23 09:07:53.653018587 +0000 UTC m=+1205.036224754" Feb 23 09:07:53 crc kubenswrapper[4940]: I0223 09:07:53.903280 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.002347 
4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.098277 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.614903 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a","Type":"ContainerStarted","Data":"86a0f688a911b36a5f6639091d5c5e7b9d68c6ccd8dec0912c6ff64e4df1616c"} Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.614949 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a","Type":"ContainerStarted","Data":"0838d98986241840487ea78b0f5f13babeebbe999e56ee8634893839176b1e71"} Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.615034 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.619259 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" event={"ID":"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300","Type":"ContainerStarted","Data":"3fd79e18a9a35c110c0ea409f0a2bba6996e2006a65f895f949e188b602368ff"} Feb 23 09:07:54 crc kubenswrapper[4940]: I0223 09:07:54.632647 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.235797867 podStartE2EDuration="3.63260386s" podCreationTimestamp="2026-02-23 09:07:51 +0000 UTC" firstStartedPulling="2026-02-23 09:07:52.412945383 +0000 UTC m=+1203.796151540" lastFinishedPulling="2026-02-23 09:07:53.809751336 +0000 UTC m=+1205.192957533" observedRunningTime="2026-02-23 09:07:54.630607858 +0000 UTC m=+1206.013814015" watchObservedRunningTime="2026-02-23 09:07:54.63260386 +0000 UTC m=+1206.015810017" Feb 23 09:07:54 crc 
kubenswrapper[4940]: I0223 09:07:54.654334 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" podStartSLOduration=3.25919038 podStartE2EDuration="3.654310994s" podCreationTimestamp="2026-02-23 09:07:51 +0000 UTC" firstStartedPulling="2026-02-23 09:07:52.014170235 +0000 UTC m=+1203.397376382" lastFinishedPulling="2026-02-23 09:07:52.409290839 +0000 UTC m=+1203.792496996" observedRunningTime="2026-02-23 09:07:54.652140246 +0000 UTC m=+1206.035346403" watchObservedRunningTime="2026-02-23 09:07:54.654310994 +0000 UTC m=+1206.037517171"
Feb 23 09:07:55 crc kubenswrapper[4940]: I0223 09:07:55.627260 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.135294 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tqvm8"]
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.137152 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.142353 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.154739 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.154790 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.173001 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tqvm8"]
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.225682 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.295673 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15943682-1c2d-49c0-997a-1770d98ce9c2-operator-scripts\") pod \"root-account-create-update-tqvm8\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.295828 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qzmt\" (UniqueName: \"kubernetes.io/projected/15943682-1c2d-49c0-997a-1770d98ce9c2-kube-api-access-7qzmt\") pod \"root-account-create-update-tqvm8\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.397239 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qzmt\" (UniqueName: \"kubernetes.io/projected/15943682-1c2d-49c0-997a-1770d98ce9c2-kube-api-access-7qzmt\") pod \"root-account-create-update-tqvm8\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.397422 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15943682-1c2d-49c0-997a-1770d98ce9c2-operator-scripts\") pod \"root-account-create-update-tqvm8\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.398846 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15943682-1c2d-49c0-997a-1770d98ce9c2-operator-scripts\") pod \"root-account-create-update-tqvm8\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.420321 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qzmt\" (UniqueName: \"kubernetes.io/projected/15943682-1c2d-49c0-997a-1770d98ce9c2-kube-api-access-7qzmt\") pod \"root-account-create-update-tqvm8\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.475536 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tqvm8"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.756586 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 23 09:08:00 crc kubenswrapper[4940]: I0223 09:08:00.915802 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tqvm8"]
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.327860 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.435902 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.602261 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dzncq"]
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.603313 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.609964 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dzncq"]
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.676519 4940 generic.go:334] "Generic (PLEG): container finished" podID="15943682-1c2d-49c0-997a-1770d98ce9c2" containerID="b9bfdd82c352d157813b0fa560b8738de873ad062b4b47db9c19da0f078c62a4" exitCode=0
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.676635 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tqvm8" event={"ID":"15943682-1c2d-49c0-997a-1770d98ce9c2","Type":"ContainerDied","Data":"b9bfdd82c352d157813b0fa560b8738de873ad062b4b47db9c19da0f078c62a4"}
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.676679 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tqvm8" event={"ID":"15943682-1c2d-49c0-997a-1770d98ce9c2","Type":"ContainerStarted","Data":"7f50d0e41de5a6bd8f9ffc82d93c36d50a19b964f6d9052d7e3ac1f3a04d7478"}
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.714119 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3b70-account-create-update-7xz8h"]
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.715428 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.718156 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.721893 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3b70-account-create-update-7xz8h"]
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.727862 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4374df7-da62-4cf6-a912-f1463d42cf3a-operator-scripts\") pod \"glance-db-create-dzncq\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.727924 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdp7k\" (UniqueName: \"kubernetes.io/projected/f4374df7-da62-4cf6-a912-f1463d42cf3a-kube-api-access-vdp7k\") pod \"glance-db-create-dzncq\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.743473 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.793069 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-7gv4v"]
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.793486 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerName="dnsmasq-dns" containerID="cri-o://d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb" gracePeriod=10
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.829237 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4374df7-da62-4cf6-a912-f1463d42cf3a-operator-scripts\") pod \"glance-db-create-dzncq\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.829823 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-operator-scripts\") pod \"glance-3b70-account-create-update-7xz8h\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.829946 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjsfw\" (UniqueName: \"kubernetes.io/projected/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-kube-api-access-cjsfw\") pod \"glance-3b70-account-create-update-7xz8h\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.829998 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdp7k\" (UniqueName: \"kubernetes.io/projected/f4374df7-da62-4cf6-a912-f1463d42cf3a-kube-api-access-vdp7k\") pod \"glance-db-create-dzncq\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.830284 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4374df7-da62-4cf6-a912-f1463d42cf3a-operator-scripts\") pod \"glance-db-create-dzncq\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.848826 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdp7k\" (UniqueName: \"kubernetes.io/projected/f4374df7-da62-4cf6-a912-f1463d42cf3a-kube-api-access-vdp7k\") pod \"glance-db-create-dzncq\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.918553 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzncq"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.931236 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-operator-scripts\") pod \"glance-3b70-account-create-update-7xz8h\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.931286 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjsfw\" (UniqueName: \"kubernetes.io/projected/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-kube-api-access-cjsfw\") pod \"glance-3b70-account-create-update-7xz8h\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.932035 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-operator-scripts\") pod \"glance-3b70-account-create-update-7xz8h\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:01 crc kubenswrapper[4940]: I0223 09:08:01.954291 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjsfw\" (UniqueName: \"kubernetes.io/projected/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-kube-api-access-cjsfw\") pod \"glance-3b70-account-create-update-7xz8h\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.043858 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3b70-account-create-update-7xz8h"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.264823 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.337298 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-ovsdbserver-nb\") pod \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") "
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.337458 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rp8s\" (UniqueName: \"kubernetes.io/projected/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-kube-api-access-5rp8s\") pod \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") "
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.337506 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-config\") pod \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") "
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.337693 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-dns-svc\") pod \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\" (UID: \"0e4471c0-8a8f-4f32-8e0f-678066e4afc1\") "
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.341621 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-kube-api-access-5rp8s" (OuterVolumeSpecName: "kube-api-access-5rp8s") pod "0e4471c0-8a8f-4f32-8e0f-678066e4afc1" (UID: "0e4471c0-8a8f-4f32-8e0f-678066e4afc1"). InnerVolumeSpecName "kube-api-access-5rp8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.378205 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e4471c0-8a8f-4f32-8e0f-678066e4afc1" (UID: "0e4471c0-8a8f-4f32-8e0f-678066e4afc1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.378538 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e4471c0-8a8f-4f32-8e0f-678066e4afc1" (UID: "0e4471c0-8a8f-4f32-8e0f-678066e4afc1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.387327 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-config" (OuterVolumeSpecName: "config") pod "0e4471c0-8a8f-4f32-8e0f-678066e4afc1" (UID: "0e4471c0-8a8f-4f32-8e0f-678066e4afc1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.414680 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dzncq"]
Feb 23 09:08:02 crc kubenswrapper[4940]: W0223 09:08:02.417683 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4374df7_da62_4cf6_a912_f1463d42cf3a.slice/crio-d144715e873b80dd3c05c8dd3a966a86133670f34772aa5980036a4660b5abe8 WatchSource:0}: Error finding container d144715e873b80dd3c05c8dd3a966a86133670f34772aa5980036a4660b5abe8: Status 404 returned error can't find the container with id d144715e873b80dd3c05c8dd3a966a86133670f34772aa5980036a4660b5abe8
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.440171 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.440210 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rp8s\" (UniqueName: \"kubernetes.io/projected/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-kube-api-access-5rp8s\") on node \"crc\" DevicePath \"\""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.440227 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-config\") on node \"crc\" DevicePath \"\""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.440242 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e4471c0-8a8f-4f32-8e0f-678066e4afc1-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.481053 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pjzmw"]
Feb 23 09:08:02 crc kubenswrapper[4940]: E0223 09:08:02.481383 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerName="init"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.481396 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerName="init"
Feb 23 09:08:02 crc kubenswrapper[4940]: E0223 09:08:02.481428 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerName="dnsmasq-dns"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.481434 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerName="dnsmasq-dns"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.481700 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerName="dnsmasq-dns"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.482205 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.492777 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pjzmw"]
Feb 23 09:08:02 crc kubenswrapper[4940]: W0223 09:08:02.539591 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbd1d1cf_4935_4c7b_b7d2_35a6d801d15e.slice/crio-b0532b6155437f8027aa328bde2cb3ba95db5c3a55efd10429cbf7e4442a3a03 WatchSource:0}: Error finding container b0532b6155437f8027aa328bde2cb3ba95db5c3a55efd10429cbf7e4442a3a03: Status 404 returned error can't find the container with id b0532b6155437f8027aa328bde2cb3ba95db5c3a55efd10429cbf7e4442a3a03
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.539876 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3b70-account-create-update-7xz8h"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.542182 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xscx\" (UniqueName: \"kubernetes.io/projected/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-kube-api-access-5xscx\") pod \"keystone-db-create-pjzmw\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.542690 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-operator-scripts\") pod \"keystone-db-create-pjzmw\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.597383 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2013-account-create-update-5rbqj"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.598464 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.605052 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2013-account-create-update-5rbqj"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.610386 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.644647 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtr7\" (UniqueName: \"kubernetes.io/projected/b7335ef7-f87f-4e06-9992-59f607a87dfa-kube-api-access-pgtr7\") pod \"keystone-2013-account-create-update-5rbqj\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.644738 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xscx\" (UniqueName: \"kubernetes.io/projected/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-kube-api-access-5xscx\") pod \"keystone-db-create-pjzmw\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.644800 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-operator-scripts\") pod \"keystone-db-create-pjzmw\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.644833 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7335ef7-f87f-4e06-9992-59f607a87dfa-operator-scripts\") pod \"keystone-2013-account-create-update-5rbqj\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.645684 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-operator-scripts\") pod \"keystone-db-create-pjzmw\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.662527 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xscx\" (UniqueName: \"kubernetes.io/projected/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-kube-api-access-5xscx\") pod \"keystone-db-create-pjzmw\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.681971 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-htjl5"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.683261 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.691278 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3b70-account-create-update-7xz8h" event={"ID":"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e","Type":"ContainerStarted","Data":"b0532b6155437f8027aa328bde2cb3ba95db5c3a55efd10429cbf7e4442a3a03"}
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.691415 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-htjl5"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.696542 4940 generic.go:334] "Generic (PLEG): container finished" podID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" containerID="d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb" exitCode=0
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.696624 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" event={"ID":"0e4471c0-8a8f-4f32-8e0f-678066e4afc1","Type":"ContainerDied","Data":"d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb"}
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.696639 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.696664 4940 scope.go:117] "RemoveContainer" containerID="d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.696652 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-7gv4v" event={"ID":"0e4471c0-8a8f-4f32-8e0f-678066e4afc1","Type":"ContainerDied","Data":"61c15a27def7a2d9053c18b9fa9be120e84da5cfba14ec8df05886f485e7941c"}
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.698797 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dzncq" event={"ID":"f4374df7-da62-4cf6-a912-f1463d42cf3a","Type":"ContainerStarted","Data":"238321d87e92995516535d3e05a15bbf1e7b3cfed587e6ba115a0680d34cfb77"}
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.698837 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dzncq" event={"ID":"f4374df7-da62-4cf6-a912-f1463d42cf3a","Type":"ContainerStarted","Data":"d144715e873b80dd3c05c8dd3a966a86133670f34772aa5980036a4660b5abe8"}
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.727729 4940 scope.go:117] "RemoveContainer" containerID="aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.728242 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dzncq" podStartSLOduration=1.72821729 podStartE2EDuration="1.72821729s" podCreationTimestamp="2026-02-23 09:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:02.719727247 +0000 UTC m=+1214.102933404" watchObservedRunningTime="2026-02-23 09:08:02.72821729 +0000 UTC m=+1214.111423457"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.746416 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-operator-scripts\") pod \"placement-db-create-htjl5\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.747438 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7335ef7-f87f-4e06-9992-59f607a87dfa-operator-scripts\") pod \"keystone-2013-account-create-update-5rbqj\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.747525 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2skt5\" (UniqueName: \"kubernetes.io/projected/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-kube-api-access-2skt5\") pod \"placement-db-create-htjl5\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.747667 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtr7\" (UniqueName: \"kubernetes.io/projected/b7335ef7-f87f-4e06-9992-59f607a87dfa-kube-api-access-pgtr7\") pod \"keystone-2013-account-create-update-5rbqj\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.747749 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-7gv4v"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.748449 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7335ef7-f87f-4e06-9992-59f607a87dfa-operator-scripts\") pod \"keystone-2013-account-create-update-5rbqj\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.754427 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-7gv4v"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.767786 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtr7\" (UniqueName: \"kubernetes.io/projected/b7335ef7-f87f-4e06-9992-59f607a87dfa-kube-api-access-pgtr7\") pod \"keystone-2013-account-create-update-5rbqj\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.773293 4940 scope.go:117] "RemoveContainer" containerID="d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb"
Feb 23 09:08:02 crc kubenswrapper[4940]: E0223 09:08:02.773742 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb\": container with ID starting with d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb not found: ID does not exist" containerID="d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.773772 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb"} err="failed to get container status \"d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb\": rpc error: code = NotFound desc = could not find container \"d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb\": container with ID starting with d8e4fdbe8a3d253f35d6fbfbcae806e545f30ee2d3d432835fe6b074235a33eb not found: ID does not exist"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.773794 4940 scope.go:117] "RemoveContainer" containerID="aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab"
Feb 23 09:08:02 crc kubenswrapper[4940]: E0223 09:08:02.784953 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab\": container with ID starting with aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab not found: ID does not exist" containerID="aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.785026 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab"} err="failed to get container status \"aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab\": rpc error: code = NotFound desc = could not find container \"aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab\": container with ID starting with aa9155c082aaba36d6b8cf687657f5b5fbfc2ea042c2cfe390578dd4ac636bab not found: ID does not exist"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.790753 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-25d4-account-create-update-xmnzj"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.792818 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.796512 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.802169 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-25d4-account-create-update-xmnzj"]
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.818629 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pjzmw"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.858383 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-operator-scripts\") pod \"placement-db-create-htjl5\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.858482 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a40974fa-e647-45d0-b3a4-6d9f99b3039d-operator-scripts\") pod \"placement-25d4-account-create-update-xmnzj\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.858598 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2skt5\" (UniqueName: \"kubernetes.io/projected/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-kube-api-access-2skt5\") pod \"placement-db-create-htjl5\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.858679 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cx4b\" (UniqueName: \"kubernetes.io/projected/a40974fa-e647-45d0-b3a4-6d9f99b3039d-kube-api-access-2cx4b\") pod \"placement-25d4-account-create-update-xmnzj\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.859246 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-operator-scripts\") pod \"placement-db-create-htjl5\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.879374 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2skt5\" (UniqueName: \"kubernetes.io/projected/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-kube-api-access-2skt5\") pod \"placement-db-create-htjl5\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " pod="openstack/placement-db-create-htjl5"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.944192 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2013-account-create-update-5rbqj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.961335 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a40974fa-e647-45d0-b3a4-6d9f99b3039d-operator-scripts\") pod \"placement-25d4-account-create-update-xmnzj\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.962190 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a40974fa-e647-45d0-b3a4-6d9f99b3039d-operator-scripts\") pod \"placement-25d4-account-create-update-xmnzj\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.962259 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cx4b\" (UniqueName: \"kubernetes.io/projected/a40974fa-e647-45d0-b3a4-6d9f99b3039d-kube-api-access-2cx4b\") pod \"placement-25d4-account-create-update-xmnzj\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:02 crc kubenswrapper[4940]: I0223 09:08:02.979020 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cx4b\" (UniqueName: \"kubernetes.io/projected/a40974fa-e647-45d0-b3a4-6d9f99b3039d-kube-api-access-2cx4b\") pod \"placement-25d4-account-create-update-xmnzj\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " pod="openstack/placement-25d4-account-create-update-xmnzj"
Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.017760 4940 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-create-htjl5" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.115333 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tqvm8" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.164798 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15943682-1c2d-49c0-997a-1770d98ce9c2-operator-scripts\") pod \"15943682-1c2d-49c0-997a-1770d98ce9c2\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.164845 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qzmt\" (UniqueName: \"kubernetes.io/projected/15943682-1c2d-49c0-997a-1770d98ce9c2-kube-api-access-7qzmt\") pod \"15943682-1c2d-49c0-997a-1770d98ce9c2\" (UID: \"15943682-1c2d-49c0-997a-1770d98ce9c2\") " Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.165785 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15943682-1c2d-49c0-997a-1770d98ce9c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15943682-1c2d-49c0-997a-1770d98ce9c2" (UID: "15943682-1c2d-49c0-997a-1770d98ce9c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.168297 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15943682-1c2d-49c0-997a-1770d98ce9c2-kube-api-access-7qzmt" (OuterVolumeSpecName: "kube-api-access-7qzmt") pod "15943682-1c2d-49c0-997a-1770d98ce9c2" (UID: "15943682-1c2d-49c0-997a-1770d98ce9c2"). InnerVolumeSpecName "kube-api-access-7qzmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.170433 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-25d4-account-create-update-xmnzj" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.266730 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15943682-1c2d-49c0-997a-1770d98ce9c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.266763 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qzmt\" (UniqueName: \"kubernetes.io/projected/15943682-1c2d-49c0-997a-1770d98ce9c2-kube-api-access-7qzmt\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.285591 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pjzmw"] Feb 23 09:08:03 crc kubenswrapper[4940]: W0223 09:08:03.293501 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c0d4f47_b6ec_4115_95ed_466d4aa7edf5.slice/crio-8dad1781653d23f900bbbcb046aa6c960e25d1dc62719a48f6a53dcda2d38254 WatchSource:0}: Error finding container 8dad1781653d23f900bbbcb046aa6c960e25d1dc62719a48f6a53dcda2d38254: Status 404 returned error can't find the container with id 8dad1781653d23f900bbbcb046aa6c960e25d1dc62719a48f6a53dcda2d38254 Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.357565 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4471c0-8a8f-4f32-8e0f-678066e4afc1" path="/var/lib/kubelet/pods/0e4471c0-8a8f-4f32-8e0f-678066e4afc1/volumes" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.388429 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2013-account-create-update-5rbqj"] Feb 23 09:08:03 crc kubenswrapper[4940]: W0223 09:08:03.395258 
4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7335ef7_f87f_4e06_9992_59f607a87dfa.slice/crio-6983c9ada1c13616a090a79a504a9aa252acfe100439537baaa5446d6694e471 WatchSource:0}: Error finding container 6983c9ada1c13616a090a79a504a9aa252acfe100439537baaa5446d6694e471: Status 404 returned error can't find the container with id 6983c9ada1c13616a090a79a504a9aa252acfe100439537baaa5446d6694e471 Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.484743 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-htjl5"] Feb 23 09:08:03 crc kubenswrapper[4940]: W0223 09:08:03.487002 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b4aa31_5e69_4df5_ba1c_19b12f8ba67b.slice/crio-52e820f0c05e97a99d3b783dbb84db4d9433d8452c4d0faaab271bb865b8b7a5 WatchSource:0}: Error finding container 52e820f0c05e97a99d3b783dbb84db4d9433d8452c4d0faaab271bb865b8b7a5: Status 404 returned error can't find the container with id 52e820f0c05e97a99d3b783dbb84db4d9433d8452c4d0faaab271bb865b8b7a5 Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.591788 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-25d4-account-create-update-xmnzj"] Feb 23 09:08:03 crc kubenswrapper[4940]: W0223 09:08:03.597036 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40974fa_e647_45d0_b3a4_6d9f99b3039d.slice/crio-e1e18e252affd6a26b42bb268b9e49bebcb50ac5aa5de6a7efbc3164e76c845c WatchSource:0}: Error finding container e1e18e252affd6a26b42bb268b9e49bebcb50ac5aa5de6a7efbc3164e76c845c: Status 404 returned error can't find the container with id e1e18e252affd6a26b42bb268b9e49bebcb50ac5aa5de6a7efbc3164e76c845c Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.705128 4940 generic.go:334] "Generic (PLEG): 
container finished" podID="fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" containerID="668cdceee79943f61deb09513605bd5d0263cea76401310aa436e5b03d86db21" exitCode=0 Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.705197 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3b70-account-create-update-7xz8h" event={"ID":"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e","Type":"ContainerDied","Data":"668cdceee79943f61deb09513605bd5d0263cea76401310aa436e5b03d86db21"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.706970 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tqvm8" event={"ID":"15943682-1c2d-49c0-997a-1770d98ce9c2","Type":"ContainerDied","Data":"7f50d0e41de5a6bd8f9ffc82d93c36d50a19b964f6d9052d7e3ac1f3a04d7478"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.707007 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f50d0e41de5a6bd8f9ffc82d93c36d50a19b964f6d9052d7e3ac1f3a04d7478" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.707057 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tqvm8" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.708861 4940 generic.go:334] "Generic (PLEG): container finished" podID="f4374df7-da62-4cf6-a912-f1463d42cf3a" containerID="238321d87e92995516535d3e05a15bbf1e7b3cfed587e6ba115a0680d34cfb77" exitCode=0 Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.709012 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dzncq" event={"ID":"f4374df7-da62-4cf6-a912-f1463d42cf3a","Type":"ContainerDied","Data":"238321d87e92995516535d3e05a15bbf1e7b3cfed587e6ba115a0680d34cfb77"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.715559 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2013-account-create-update-5rbqj" event={"ID":"b7335ef7-f87f-4e06-9992-59f607a87dfa","Type":"ContainerStarted","Data":"ae127364224ffbb3721761580fa1deedee4320a20294817cd2b2aa6e16b7b2d8"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.715603 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2013-account-create-update-5rbqj" event={"ID":"b7335ef7-f87f-4e06-9992-59f607a87dfa","Type":"ContainerStarted","Data":"6983c9ada1c13616a090a79a504a9aa252acfe100439537baaa5446d6694e471"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.717887 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25d4-account-create-update-xmnzj" event={"ID":"a40974fa-e647-45d0-b3a4-6d9f99b3039d","Type":"ContainerStarted","Data":"e1e18e252affd6a26b42bb268b9e49bebcb50ac5aa5de6a7efbc3164e76c845c"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.724294 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-htjl5" event={"ID":"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b","Type":"ContainerStarted","Data":"02d61f937ec3457890463f07b84036feb2152a52fc370b14157508774d63207f"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.724466 
4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-htjl5" event={"ID":"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b","Type":"ContainerStarted","Data":"52e820f0c05e97a99d3b783dbb84db4d9433d8452c4d0faaab271bb865b8b7a5"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.726736 4940 generic.go:334] "Generic (PLEG): container finished" podID="7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" containerID="ab3a5e6678b6c95a3c3d418985abae07cc44098450902f3ec7d6342bf9db75aa" exitCode=0 Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.726782 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pjzmw" event={"ID":"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5","Type":"ContainerDied","Data":"ab3a5e6678b6c95a3c3d418985abae07cc44098450902f3ec7d6342bf9db75aa"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.726834 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pjzmw" event={"ID":"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5","Type":"ContainerStarted","Data":"8dad1781653d23f900bbbcb046aa6c960e25d1dc62719a48f6a53dcda2d38254"} Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.743835 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2013-account-create-update-5rbqj" podStartSLOduration=1.743818932 podStartE2EDuration="1.743818932s" podCreationTimestamp="2026-02-23 09:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:03.739035462 +0000 UTC m=+1215.122241619" watchObservedRunningTime="2026-02-23 09:08:03.743818932 +0000 UTC m=+1215.127025089" Feb 23 09:08:03 crc kubenswrapper[4940]: I0223 09:08:03.774356 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-htjl5" podStartSLOduration=1.774333969 podStartE2EDuration="1.774333969s" podCreationTimestamp="2026-02-23 09:08:02 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:03.770889242 +0000 UTC m=+1215.154095419" watchObservedRunningTime="2026-02-23 09:08:03.774333969 +0000 UTC m=+1215.157540126" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.036855 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-l59jm"] Feb 23 09:08:04 crc kubenswrapper[4940]: E0223 09:08:04.037536 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15943682-1c2d-49c0-997a-1770d98ce9c2" containerName="mariadb-account-create-update" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.037553 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="15943682-1c2d-49c0-997a-1770d98ce9c2" containerName="mariadb-account-create-update" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.037764 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="15943682-1c2d-49c0-997a-1770d98ce9c2" containerName="mariadb-account-create-update" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.038867 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.081733 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-l59jm"] Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.183860 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.184091 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sbd4\" (UniqueName: \"kubernetes.io/projected/03982047-fd03-484c-9467-564d3ba0876a-kube-api-access-8sbd4\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.184240 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-config\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.184289 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-dns-svc\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.184322 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.286518 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.286643 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.286717 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sbd4\" (UniqueName: \"kubernetes.io/projected/03982047-fd03-484c-9467-564d3ba0876a-kube-api-access-8sbd4\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.286791 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-config\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.286833 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-dns-svc\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.287703 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.287711 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.287879 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-config\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.287961 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-dns-svc\") pod \"dnsmasq-dns-698758b865-l59jm\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.313132 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sbd4\" (UniqueName: \"kubernetes.io/projected/03982047-fd03-484c-9467-564d3ba0876a-kube-api-access-8sbd4\") pod \"dnsmasq-dns-698758b865-l59jm\" 
(UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.375410 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.737290 4940 generic.go:334] "Generic (PLEG): container finished" podID="b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" containerID="02d61f937ec3457890463f07b84036feb2152a52fc370b14157508774d63207f" exitCode=0 Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.737366 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-htjl5" event={"ID":"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b","Type":"ContainerDied","Data":"02d61f937ec3457890463f07b84036feb2152a52fc370b14157508774d63207f"} Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.740659 4940 generic.go:334] "Generic (PLEG): container finished" podID="b7335ef7-f87f-4e06-9992-59f607a87dfa" containerID="ae127364224ffbb3721761580fa1deedee4320a20294817cd2b2aa6e16b7b2d8" exitCode=0 Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.740753 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2013-account-create-update-5rbqj" event={"ID":"b7335ef7-f87f-4e06-9992-59f607a87dfa","Type":"ContainerDied","Data":"ae127364224ffbb3721761580fa1deedee4320a20294817cd2b2aa6e16b7b2d8"} Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.742394 4940 generic.go:334] "Generic (PLEG): container finished" podID="a40974fa-e647-45d0-b3a4-6d9f99b3039d" containerID="4a51a5cdb80d3375354b54c221153f975715ed0531469d89d407194b20d251b1" exitCode=0 Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.742510 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25d4-account-create-update-xmnzj" 
event={"ID":"a40974fa-e647-45d0-b3a4-6d9f99b3039d","Type":"ContainerDied","Data":"4a51a5cdb80d3375354b54c221153f975715ed0531469d89d407194b20d251b1"} Feb 23 09:08:04 crc kubenswrapper[4940]: I0223 09:08:04.866198 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-l59jm"] Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.257100 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.263217 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.265512 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.265548 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.265803 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5w9nc" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.266239 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.274917 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.276507 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzncq" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.285882 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pjzmw" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.294694 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3b70-account-create-update-7xz8h" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423331 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-operator-scripts\") pod \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423387 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjsfw\" (UniqueName: \"kubernetes.io/projected/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-kube-api-access-cjsfw\") pod \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423465 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-operator-scripts\") pod \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\" (UID: \"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e\") " Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423510 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xscx\" (UniqueName: \"kubernetes.io/projected/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-kube-api-access-5xscx\") pod \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\" (UID: \"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5\") " Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423532 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4374df7-da62-4cf6-a912-f1463d42cf3a-operator-scripts\") pod \"f4374df7-da62-4cf6-a912-f1463d42cf3a\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423546 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vdp7k\" (UniqueName: \"kubernetes.io/projected/f4374df7-da62-4cf6-a912-f1463d42cf3a-kube-api-access-vdp7k\") pod \"f4374df7-da62-4cf6-a912-f1463d42cf3a\" (UID: \"f4374df7-da62-4cf6-a912-f1463d42cf3a\") " Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423804 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423843 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-cache\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423882 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423902 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-lock\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.423930 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptbs\" (UniqueName: 
\"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-kube-api-access-tptbs\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.424098 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.424289 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4374df7-da62-4cf6-a912-f1463d42cf3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4374df7-da62-4cf6-a912-f1463d42cf3a" (UID: "f4374df7-da62-4cf6-a912-f1463d42cf3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.424296 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" (UID: "7c0d4f47-b6ec-4115-95ed-466d4aa7edf5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.424321 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" (UID: "fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.428032 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4374df7-da62-4cf6-a912-f1463d42cf3a-kube-api-access-vdp7k" (OuterVolumeSpecName: "kube-api-access-vdp7k") pod "f4374df7-da62-4cf6-a912-f1463d42cf3a" (UID: "f4374df7-da62-4cf6-a912-f1463d42cf3a"). InnerVolumeSpecName "kube-api-access-vdp7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.428099 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-kube-api-access-cjsfw" (OuterVolumeSpecName: "kube-api-access-cjsfw") pod "fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" (UID: "fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e"). InnerVolumeSpecName "kube-api-access-cjsfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.428131 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-kube-api-access-5xscx" (OuterVolumeSpecName: "kube-api-access-5xscx") pod "7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" (UID: "7c0d4f47-b6ec-4115-95ed-466d4aa7edf5"). InnerVolumeSpecName "kube-api-access-5xscx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525359 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptbs\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-kube-api-access-tptbs\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525431 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525513 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525587 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-cache\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525666 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525690 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" 
(UniqueName: \"kubernetes.io/empty-dir/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-lock\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525746 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525762 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjsfw\" (UniqueName: \"kubernetes.io/projected/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-kube-api-access-cjsfw\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525777 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525789 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xscx\" (UniqueName: \"kubernetes.io/projected/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5-kube-api-access-5xscx\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525800 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4374df7-da62-4cf6-a912-f1463d42cf3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.525815 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdp7k\" (UniqueName: \"kubernetes.io/projected/f4374df7-da62-4cf6-a912-f1463d42cf3a-kube-api-access-vdp7k\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.526182 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" 
(UniqueName: \"kubernetes.io/empty-dir/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-cache\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.526296 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-lock\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: E0223 09:08:05.526399 4940 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 09:08:05 crc kubenswrapper[4940]: E0223 09:08:05.526419 4940 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.526460 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: E0223 09:08:05.526465 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift podName:8b985bfb-fd7d-4c37-b935-26bc80e96fc0 nodeName:}" failed. No retries permitted until 2026-02-23 09:08:06.026446681 +0000 UTC m=+1217.409652838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift") pod "swift-storage-0" (UID: "8b985bfb-fd7d-4c37-b935-26bc80e96fc0") : configmap "swift-ring-files" not found Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.529750 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.547367 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.550179 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptbs\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-kube-api-access-tptbs\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.779984 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pjzmw" event={"ID":"7c0d4f47-b6ec-4115-95ed-466d4aa7edf5","Type":"ContainerDied","Data":"8dad1781653d23f900bbbcb046aa6c960e25d1dc62719a48f6a53dcda2d38254"} Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.780034 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dad1781653d23f900bbbcb046aa6c960e25d1dc62719a48f6a53dcda2d38254" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.780120 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pjzmw" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.805977 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3b70-account-create-update-7xz8h" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.806001 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3b70-account-create-update-7xz8h" event={"ID":"fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e","Type":"ContainerDied","Data":"b0532b6155437f8027aa328bde2cb3ba95db5c3a55efd10429cbf7e4442a3a03"} Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.806036 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0532b6155437f8027aa328bde2cb3ba95db5c3a55efd10429cbf7e4442a3a03" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.823978 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dzncq" event={"ID":"f4374df7-da62-4cf6-a912-f1463d42cf3a","Type":"ContainerDied","Data":"d144715e873b80dd3c05c8dd3a966a86133670f34772aa5980036a4660b5abe8"} Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.824028 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d144715e873b80dd3c05c8dd3a966a86133670f34772aa5980036a4660b5abe8" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.824114 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-dzncq" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.839986 4940 generic.go:334] "Generic (PLEG): container finished" podID="03982047-fd03-484c-9467-564d3ba0876a" containerID="fccfd6744803257c984bada6c62a8c513da8f52ec2a937c0c1525c7b06f0f5f5" exitCode=0 Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.840932 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-l59jm" event={"ID":"03982047-fd03-484c-9467-564d3ba0876a","Type":"ContainerDied","Data":"fccfd6744803257c984bada6c62a8c513da8f52ec2a937c0c1525c7b06f0f5f5"} Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.840989 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-l59jm" event={"ID":"03982047-fd03-484c-9467-564d3ba0876a","Type":"ContainerStarted","Data":"5144b4768f140b7db0071e819cf419473fe8dcf53a9c45ec75de1315452c076a"} Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.958523 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4jjdt"] Feb 23 09:08:05 crc kubenswrapper[4940]: E0223 09:08:05.958993 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" containerName="mariadb-database-create" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.959019 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" containerName="mariadb-database-create" Feb 23 09:08:05 crc kubenswrapper[4940]: E0223 09:08:05.959044 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4374df7-da62-4cf6-a912-f1463d42cf3a" containerName="mariadb-database-create" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.959053 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4374df7-da62-4cf6-a912-f1463d42cf3a" containerName="mariadb-database-create" Feb 23 09:08:05 crc kubenswrapper[4940]: E0223 09:08:05.959068 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" containerName="mariadb-account-create-update" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.959077 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" containerName="mariadb-account-create-update" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.959240 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" containerName="mariadb-account-create-update" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.959256 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4374df7-da62-4cf6-a912-f1463d42cf3a" containerName="mariadb-database-create" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.959386 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" containerName="mariadb-database-create" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.960094 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.964293 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4jjdt"] Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.966857 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.966884 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 23 09:08:05 crc kubenswrapper[4940]: I0223 09:08:05.967071 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036092 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-swiftconf\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036174 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-combined-ca-bundle\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036272 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcfq\" (UniqueName: \"kubernetes.io/projected/1b9efcfe-df2d-405e-9f10-d22dbce174e9-kube-api-access-6jcfq\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 
09:08:06.036310 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-scripts\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036345 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036386 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1b9efcfe-df2d-405e-9f10-d22dbce174e9-etc-swift\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036417 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-ring-data-devices\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.036450 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-dispersionconf\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: E0223 09:08:06.036711 4940 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 09:08:06 crc kubenswrapper[4940]: E0223 09:08:06.036733 4940 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 09:08:06 crc kubenswrapper[4940]: E0223 09:08:06.036776 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift podName:8b985bfb-fd7d-4c37-b935-26bc80e96fc0 nodeName:}" failed. No retries permitted until 2026-02-23 09:08:07.036760105 +0000 UTC m=+1218.419966262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift") pod "swift-storage-0" (UID: "8b985bfb-fd7d-4c37-b935-26bc80e96fc0") : configmap "swift-ring-files" not found Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.137795 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-scripts\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.138116 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1b9efcfe-df2d-405e-9f10-d22dbce174e9-etc-swift\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.138144 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-ring-data-devices\") pod \"swift-ring-rebalance-4jjdt\" (UID: 
\"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.138174 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-dispersionconf\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.138200 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-swiftconf\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.138240 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-combined-ca-bundle\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.138290 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcfq\" (UniqueName: \"kubernetes.io/projected/1b9efcfe-df2d-405e-9f10-d22dbce174e9-kube-api-access-6jcfq\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.139259 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-scripts\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 
09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.139266 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1b9efcfe-df2d-405e-9f10-d22dbce174e9-etc-swift\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.139486 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-ring-data-devices\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.143957 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-dispersionconf\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.144148 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-swiftconf\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.144436 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-combined-ca-bundle\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.159163 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6jcfq\" (UniqueName: \"kubernetes.io/projected/1b9efcfe-df2d-405e-9f10-d22dbce174e9-kube-api-access-6jcfq\") pod \"swift-ring-rebalance-4jjdt\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.268941 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2013-account-create-update-5rbqj" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.279568 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.344267 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgtr7\" (UniqueName: \"kubernetes.io/projected/b7335ef7-f87f-4e06-9992-59f607a87dfa-kube-api-access-pgtr7\") pod \"b7335ef7-f87f-4e06-9992-59f607a87dfa\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.344702 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7335ef7-f87f-4e06-9992-59f607a87dfa-operator-scripts\") pod \"b7335ef7-f87f-4e06-9992-59f607a87dfa\" (UID: \"b7335ef7-f87f-4e06-9992-59f607a87dfa\") " Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.350851 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7335ef7-f87f-4e06-9992-59f607a87dfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7335ef7-f87f-4e06-9992-59f607a87dfa" (UID: "b7335ef7-f87f-4e06-9992-59f607a87dfa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.353679 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7335ef7-f87f-4e06-9992-59f607a87dfa-kube-api-access-pgtr7" (OuterVolumeSpecName: "kube-api-access-pgtr7") pod "b7335ef7-f87f-4e06-9992-59f607a87dfa" (UID: "b7335ef7-f87f-4e06-9992-59f607a87dfa"). InnerVolumeSpecName "kube-api-access-pgtr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.359105 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-25d4-account-create-update-xmnzj" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.370428 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-htjl5" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.453129 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a40974fa-e647-45d0-b3a4-6d9f99b3039d-operator-scripts\") pod \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.453241 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-operator-scripts\") pod \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.453277 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2skt5\" (UniqueName: \"kubernetes.io/projected/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-kube-api-access-2skt5\") pod \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\" (UID: \"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b\") " Feb 
23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.453315 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cx4b\" (UniqueName: \"kubernetes.io/projected/a40974fa-e647-45d0-b3a4-6d9f99b3039d-kube-api-access-2cx4b\") pod \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\" (UID: \"a40974fa-e647-45d0-b3a4-6d9f99b3039d\") " Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.453798 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7335ef7-f87f-4e06-9992-59f607a87dfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.453813 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgtr7\" (UniqueName: \"kubernetes.io/projected/b7335ef7-f87f-4e06-9992-59f607a87dfa-kube-api-access-pgtr7\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.454854 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a40974fa-e647-45d0-b3a4-6d9f99b3039d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a40974fa-e647-45d0-b3a4-6d9f99b3039d" (UID: "a40974fa-e647-45d0-b3a4-6d9f99b3039d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.455394 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" (UID: "b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.458203 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-kube-api-access-2skt5" (OuterVolumeSpecName: "kube-api-access-2skt5") pod "b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" (UID: "b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b"). InnerVolumeSpecName "kube-api-access-2skt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.476903 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40974fa-e647-45d0-b3a4-6d9f99b3039d-kube-api-access-2cx4b" (OuterVolumeSpecName: "kube-api-access-2cx4b") pod "a40974fa-e647-45d0-b3a4-6d9f99b3039d" (UID: "a40974fa-e647-45d0-b3a4-6d9f99b3039d"). InnerVolumeSpecName "kube-api-access-2cx4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.555386 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a40974fa-e647-45d0-b3a4-6d9f99b3039d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.555421 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.555431 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2skt5\" (UniqueName: \"kubernetes.io/projected/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b-kube-api-access-2skt5\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.555442 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cx4b\" (UniqueName: 
\"kubernetes.io/projected/a40974fa-e647-45d0-b3a4-6d9f99b3039d-kube-api-access-2cx4b\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.809280 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4jjdt"] Feb 23 09:08:06 crc kubenswrapper[4940]: W0223 09:08:06.827092 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b9efcfe_df2d_405e_9f10_d22dbce174e9.slice/crio-7daf4fdf2db2137a4e9297d95ff2c7ddd1b8baa90831edeb1ce590be8ac1321d WatchSource:0}: Error finding container 7daf4fdf2db2137a4e9297d95ff2c7ddd1b8baa90831edeb1ce590be8ac1321d: Status 404 returned error can't find the container with id 7daf4fdf2db2137a4e9297d95ff2c7ddd1b8baa90831edeb1ce590be8ac1321d Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.847806 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-p9579"] Feb 23 09:08:06 crc kubenswrapper[4940]: E0223 09:08:06.848246 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" containerName="mariadb-database-create" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.848273 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" containerName="mariadb-database-create" Feb 23 09:08:06 crc kubenswrapper[4940]: E0223 09:08:06.848307 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40974fa-e647-45d0-b3a4-6d9f99b3039d" containerName="mariadb-account-create-update" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.848316 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40974fa-e647-45d0-b3a4-6d9f99b3039d" containerName="mariadb-account-create-update" Feb 23 09:08:06 crc kubenswrapper[4940]: E0223 09:08:06.848329 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7335ef7-f87f-4e06-9992-59f607a87dfa" 
containerName="mariadb-account-create-update" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.848337 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7335ef7-f87f-4e06-9992-59f607a87dfa" containerName="mariadb-account-create-update" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.848552 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40974fa-e647-45d0-b3a4-6d9f99b3039d" containerName="mariadb-account-create-update" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.848575 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7335ef7-f87f-4e06-9992-59f607a87dfa" containerName="mariadb-account-create-update" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.848587 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" containerName="mariadb-database-create" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.849961 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p9579" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.851799 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-htjl5" event={"ID":"b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b","Type":"ContainerDied","Data":"52e820f0c05e97a99d3b783dbb84db4d9433d8452c4d0faaab271bb865b8b7a5"} Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.851836 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52e820f0c05e97a99d3b783dbb84db4d9433d8452c4d0faaab271bb865b8b7a5" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.851918 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-htjl5" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.853082 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.853195 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h58hk" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.854784 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-l59jm" event={"ID":"03982047-fd03-484c-9467-564d3ba0876a","Type":"ContainerStarted","Data":"a479abe871aff9bead30bdd78c529dd77e9226d086ccbee1c43f279d34eec0c0"} Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.854892 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.856691 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2013-account-create-update-5rbqj" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.860784 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2013-account-create-update-5rbqj" event={"ID":"b7335ef7-f87f-4e06-9992-59f607a87dfa","Type":"ContainerDied","Data":"6983c9ada1c13616a090a79a504a9aa252acfe100439537baaa5446d6694e471"} Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.860847 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6983c9ada1c13616a090a79a504a9aa252acfe100439537baaa5446d6694e471" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.863635 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-25d4-account-create-update-xmnzj" event={"ID":"a40974fa-e647-45d0-b3a4-6d9f99b3039d","Type":"ContainerDied","Data":"e1e18e252affd6a26b42bb268b9e49bebcb50ac5aa5de6a7efbc3164e76c845c"} Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.863678 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e18e252affd6a26b42bb268b9e49bebcb50ac5aa5de6a7efbc3164e76c845c" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.863706 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-25d4-account-create-update-xmnzj" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.866335 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jjdt" event={"ID":"1b9efcfe-df2d-405e-9f10-d22dbce174e9","Type":"ContainerStarted","Data":"7daf4fdf2db2137a4e9297d95ff2c7ddd1b8baa90831edeb1ce590be8ac1321d"} Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.874483 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p9579"] Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.915428 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-l59jm" podStartSLOduration=2.9154048919999997 podStartE2EDuration="2.915404892s" podCreationTimestamp="2026-02-23 09:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:06.902289144 +0000 UTC m=+1218.285495321" watchObservedRunningTime="2026-02-23 09:08:06.915404892 +0000 UTC m=+1218.298611059" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.963286 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rllwr\" (UniqueName: \"kubernetes.io/projected/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-kube-api-access-rllwr\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.963348 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-db-sync-config-data\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.963412 
4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-combined-ca-bundle\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:06 crc kubenswrapper[4940]: I0223 09:08:06.963432 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-config-data\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.064496 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rllwr\" (UniqueName: \"kubernetes.io/projected/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-kube-api-access-rllwr\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.064550 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-db-sync-config-data\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.064597 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-combined-ca-bundle\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.064646 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-config-data\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.064692 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:07 crc kubenswrapper[4940]: E0223 09:08:07.064911 4940 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 09:08:07 crc kubenswrapper[4940]: E0223 09:08:07.064936 4940 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 09:08:07 crc kubenswrapper[4940]: E0223 09:08:07.064984 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift podName:8b985bfb-fd7d-4c37-b935-26bc80e96fc0 nodeName:}" failed. No retries permitted until 2026-02-23 09:08:09.064968217 +0000 UTC m=+1220.448174384 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift") pod "swift-storage-0" (UID: "8b985bfb-fd7d-4c37-b935-26bc80e96fc0") : configmap "swift-ring-files" not found Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.069697 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-combined-ca-bundle\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.069958 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-db-sync-config-data\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.070086 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-config-data\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.084086 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rllwr\" (UniqueName: \"kubernetes.io/projected/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-kube-api-access-rllwr\") pod \"glance-db-sync-p9579\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.176418 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-p9579" Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.736801 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p9579"] Feb 23 09:08:07 crc kubenswrapper[4940]: W0223 09:08:07.739626 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fd00530_75d3_4e2e_aaf9_4b67a1f2e95e.slice/crio-2b1e87a7185a10bbf14dcda5a0dc9316d86c754aafb696c2f0885c7831ef7e53 WatchSource:0}: Error finding container 2b1e87a7185a10bbf14dcda5a0dc9316d86c754aafb696c2f0885c7831ef7e53: Status 404 returned error can't find the container with id 2b1e87a7185a10bbf14dcda5a0dc9316d86c754aafb696c2f0885c7831ef7e53 Feb 23 09:08:07 crc kubenswrapper[4940]: I0223 09:08:07.876295 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p9579" event={"ID":"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e","Type":"ContainerStarted","Data":"2b1e87a7185a10bbf14dcda5a0dc9316d86c754aafb696c2f0885c7831ef7e53"} Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.479980 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tqvm8"] Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.487562 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tqvm8"] Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.558478 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k4dkj"] Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.560306 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.563208 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.581500 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k4dkj"] Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.638725 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-operator-scripts\") pod \"root-account-create-update-k4dkj\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.638803 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9t9\" (UniqueName: \"kubernetes.io/projected/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-kube-api-access-dp9t9\") pod \"root-account-create-update-k4dkj\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.740970 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-operator-scripts\") pod \"root-account-create-update-k4dkj\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.741027 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp9t9\" (UniqueName: \"kubernetes.io/projected/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-kube-api-access-dp9t9\") pod \"root-account-create-update-k4dkj\" (UID: 
\"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.742512 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-operator-scripts\") pod \"root-account-create-update-k4dkj\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.760198 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp9t9\" (UniqueName: \"kubernetes.io/projected/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-kube-api-access-dp9t9\") pod \"root-account-create-update-k4dkj\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:08 crc kubenswrapper[4940]: I0223 09:08:08.910434 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:09 crc kubenswrapper[4940]: I0223 09:08:09.146043 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:09 crc kubenswrapper[4940]: E0223 09:08:09.146276 4940 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 09:08:09 crc kubenswrapper[4940]: E0223 09:08:09.146327 4940 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 09:08:09 crc kubenswrapper[4940]: E0223 09:08:09.146409 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift podName:8b985bfb-fd7d-4c37-b935-26bc80e96fc0 nodeName:}" failed. No retries permitted until 2026-02-23 09:08:13.14636185 +0000 UTC m=+1224.529568007 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift") pod "swift-storage-0" (UID: "8b985bfb-fd7d-4c37-b935-26bc80e96fc0") : configmap "swift-ring-files" not found Feb 23 09:08:09 crc kubenswrapper[4940]: I0223 09:08:09.358043 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15943682-1c2d-49c0-997a-1770d98ce9c2" path="/var/lib/kubelet/pods/15943682-1c2d-49c0-997a-1770d98ce9c2/volumes" Feb 23 09:08:10 crc kubenswrapper[4940]: I0223 09:08:10.378416 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k4dkj"] Feb 23 09:08:10 crc kubenswrapper[4940]: W0223 09:08:10.385919 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf2e1100_e815_4e3c_9d88_aa5cf3fb47d0.slice/crio-d028b2b7a408d326801f98fed988194753a80d3bb481ba9aeeb0ec12fad34c9c WatchSource:0}: Error finding container d028b2b7a408d326801f98fed988194753a80d3bb481ba9aeeb0ec12fad34c9c: Status 404 returned error can't find the container with id d028b2b7a408d326801f98fed988194753a80d3bb481ba9aeeb0ec12fad34c9c Feb 23 09:08:10 crc kubenswrapper[4940]: I0223 09:08:10.905740 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jjdt" event={"ID":"1b9efcfe-df2d-405e-9f10-d22dbce174e9","Type":"ContainerStarted","Data":"3b8ecdd503d2223b4c7153fbc75a1f144bffc636cd6f85abf3c99b81ac5ae618"} Feb 23 09:08:10 crc kubenswrapper[4940]: I0223 09:08:10.907581 4940 generic.go:334] "Generic (PLEG): container finished" podID="cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" containerID="1c7944c7a25ff1cdb994986f3df318030a88b5c9893e000a6275cb74fed9313b" exitCode=0 Feb 23 09:08:10 crc kubenswrapper[4940]: I0223 09:08:10.907648 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k4dkj" 
event={"ID":"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0","Type":"ContainerDied","Data":"1c7944c7a25ff1cdb994986f3df318030a88b5c9893e000a6275cb74fed9313b"} Feb 23 09:08:10 crc kubenswrapper[4940]: I0223 09:08:10.907667 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k4dkj" event={"ID":"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0","Type":"ContainerStarted","Data":"d028b2b7a408d326801f98fed988194753a80d3bb481ba9aeeb0ec12fad34c9c"} Feb 23 09:08:10 crc kubenswrapper[4940]: I0223 09:08:10.936152 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4jjdt" podStartSLOduration=2.911519567 podStartE2EDuration="5.936134212s" podCreationTimestamp="2026-02-23 09:08:05 +0000 UTC" firstStartedPulling="2026-02-23 09:08:06.831186985 +0000 UTC m=+1218.214393162" lastFinishedPulling="2026-02-23 09:08:09.85580164 +0000 UTC m=+1221.239007807" observedRunningTime="2026-02-23 09:08:10.929525516 +0000 UTC m=+1222.312731673" watchObservedRunningTime="2026-02-23 09:08:10.936134212 +0000 UTC m=+1222.319340369" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.029122 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.310438 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.440075 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp9t9\" (UniqueName: \"kubernetes.io/projected/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-kube-api-access-dp9t9\") pod \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.440391 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-operator-scripts\") pod \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\" (UID: \"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0\") " Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.440964 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" (UID: "cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.445952 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-kube-api-access-dp9t9" (OuterVolumeSpecName: "kube-api-access-dp9t9") pod "cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" (UID: "cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0"). InnerVolumeSpecName "kube-api-access-dp9t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.542813 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp9t9\" (UniqueName: \"kubernetes.io/projected/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-kube-api-access-dp9t9\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.542861 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.940051 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k4dkj" event={"ID":"cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0","Type":"ContainerDied","Data":"d028b2b7a408d326801f98fed988194753a80d3bb481ba9aeeb0ec12fad34c9c"} Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.940106 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d028b2b7a408d326801f98fed988194753a80d3bb481ba9aeeb0ec12fad34c9c" Feb 23 09:08:12 crc kubenswrapper[4940]: I0223 09:08:12.940178 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k4dkj" Feb 23 09:08:13 crc kubenswrapper[4940]: I0223 09:08:13.154377 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:13 crc kubenswrapper[4940]: E0223 09:08:13.154628 4940 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 09:08:13 crc kubenswrapper[4940]: E0223 09:08:13.154695 4940 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 09:08:13 crc kubenswrapper[4940]: E0223 09:08:13.154752 4940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift podName:8b985bfb-fd7d-4c37-b935-26bc80e96fc0 nodeName:}" failed. No retries permitted until 2026-02-23 09:08:21.154735215 +0000 UTC m=+1232.537941362 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift") pod "swift-storage-0" (UID: "8b985bfb-fd7d-4c37-b935-26bc80e96fc0") : configmap "swift-ring-files" not found Feb 23 09:08:13 crc kubenswrapper[4940]: I0223 09:08:13.950288 4940 generic.go:334] "Generic (PLEG): container finished" podID="987e4448-8da2-41e3-9dba-777d599609f5" containerID="23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713" exitCode=0 Feb 23 09:08:13 crc kubenswrapper[4940]: I0223 09:08:13.950339 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"987e4448-8da2-41e3-9dba-777d599609f5","Type":"ContainerDied","Data":"23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713"} Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.376858 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.433062 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5lc7s"] Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.433290 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="dnsmasq-dns" containerID="cri-o://3fd79e18a9a35c110c0ea409f0a2bba6996e2006a65f895f949e188b602368ff" gracePeriod=10 Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.959307 4940 generic.go:334] "Generic (PLEG): container finished" podID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerID="3fd79e18a9a35c110c0ea409f0a2bba6996e2006a65f895f949e188b602368ff" exitCode=0 Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.959376 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" 
event={"ID":"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300","Type":"ContainerDied","Data":"3fd79e18a9a35c110c0ea409f0a2bba6996e2006a65f895f949e188b602368ff"} Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.961508 4940 generic.go:334] "Generic (PLEG): container finished" podID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerID="bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e" exitCode=0 Feb 23 09:08:14 crc kubenswrapper[4940]: I0223 09:08:14.961546 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7453ffea-4a1f-426c-ac9c-3377973fdb19","Type":"ContainerDied","Data":"bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e"} Feb 23 09:08:16 crc kubenswrapper[4940]: I0223 09:08:16.743378 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.011553 4940 generic.go:334] "Generic (PLEG): container finished" podID="1b9efcfe-df2d-405e-9f10-d22dbce174e9" containerID="3b8ecdd503d2223b4c7153fbc75a1f144bffc636cd6f85abf3c99b81ac5ae618" exitCode=0 Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.011818 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jjdt" event={"ID":"1b9efcfe-df2d-405e-9f10-d22dbce174e9","Type":"ContainerDied","Data":"3b8ecdd503d2223b4c7153fbc75a1f144bffc636cd6f85abf3c99b81ac5ae618"} Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.594024 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-skhdb" podUID="f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa" containerName="ovn-controller" probeResult="failure" output=< Feb 23 09:08:17 crc kubenswrapper[4940]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status 
Feb 23 09:08:17 crc kubenswrapper[4940]: > Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.619152 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.638501 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-srtp4" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.887348 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-skhdb-config-k47nn"] Feb 23 09:08:17 crc kubenswrapper[4940]: E0223 09:08:17.887781 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" containerName="mariadb-account-create-update" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.887797 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" containerName="mariadb-account-create-update" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.887996 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" containerName="mariadb-account-create-update" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.919534 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-skhdb-config-k47nn"] Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.919677 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.923992 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.954540 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run-ovn\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.954595 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.954661 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-log-ovn\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.954690 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-additional-scripts\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.954723 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnfj\" (UniqueName: \"kubernetes.io/projected/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-kube-api-access-clnfj\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:17 crc kubenswrapper[4940]: I0223 09:08:17.955597 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-scripts\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.058294 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run-ovn\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.058357 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.059134 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-log-ovn\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.059208 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-additional-scripts\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.059900 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.060211 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnfj\" (UniqueName: \"kubernetes.io/projected/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-kube-api-access-clnfj\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.060251 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-scripts\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.061563 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run-ovn\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.061768 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-log-ovn\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.062775 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-additional-scripts\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.066539 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-scripts\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.083961 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnfj\" (UniqueName: \"kubernetes.io/projected/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-kube-api-access-clnfj\") pod \"ovn-controller-skhdb-config-k47nn\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:18 crc kubenswrapper[4940]: I0223 09:08:18.302452 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:21 crc kubenswrapper[4940]: I0223 09:08:21.238930 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:21 crc kubenswrapper[4940]: I0223 09:08:21.247440 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b985bfb-fd7d-4c37-b935-26bc80e96fc0-etc-swift\") pod \"swift-storage-0\" (UID: \"8b985bfb-fd7d-4c37-b935-26bc80e96fc0\") " pod="openstack/swift-storage-0" Feb 23 09:08:21 crc kubenswrapper[4940]: I0223 09:08:21.510535 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 09:08:21 crc kubenswrapper[4940]: I0223 09:08:21.743554 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.622772 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-skhdb" podUID="f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa" containerName="ovn-controller" probeResult="failure" output=< Feb 23 09:08:22 crc kubenswrapper[4940]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 09:08:22 crc kubenswrapper[4940]: > Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.686423 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.754518 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761372 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1b9efcfe-df2d-405e-9f10-d22dbce174e9-etc-swift\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761432 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-scripts\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761464 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-config\") pod \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761504 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-dispersionconf\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761534 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-sb\") pod \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " Feb 23 
09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761554 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-combined-ca-bundle\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761580 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-dns-svc\") pod \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761628 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cxml\" (UniqueName: \"kubernetes.io/projected/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-kube-api-access-2cxml\") pod \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761677 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-nb\") pod \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\" (UID: \"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761697 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-ring-data-devices\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761747 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jcfq\" (UniqueName: 
\"kubernetes.io/projected/1b9efcfe-df2d-405e-9f10-d22dbce174e9-kube-api-access-6jcfq\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.761773 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-swiftconf\") pod \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\" (UID: \"1b9efcfe-df2d-405e-9f10-d22dbce174e9\") " Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.763863 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9efcfe-df2d-405e-9f10-d22dbce174e9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.765028 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.771956 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9efcfe-df2d-405e-9f10-d22dbce174e9-kube-api-access-6jcfq" (OuterVolumeSpecName: "kube-api-access-6jcfq") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "kube-api-access-6jcfq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.772129 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-kube-api-access-2cxml" (OuterVolumeSpecName: "kube-api-access-2cxml") pod "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" (UID: "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300"). InnerVolumeSpecName "kube-api-access-2cxml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.775132 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.810369 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.816164 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.821484 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" (UID: "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.840686 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-config" (OuterVolumeSpecName: "config") pod "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" (UID: "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.854309 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-scripts" (OuterVolumeSpecName: "scripts") pod "1b9efcfe-df2d-405e-9f10-d22dbce174e9" (UID: "1b9efcfe-df2d-405e-9f10-d22dbce174e9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.862862 4940 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863137 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863147 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863179 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cxml\" (UniqueName: \"kubernetes.io/projected/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-kube-api-access-2cxml\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863191 4940 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863199 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jcfq\" (UniqueName: \"kubernetes.io/projected/1b9efcfe-df2d-405e-9f10-d22dbce174e9-kube-api-access-6jcfq\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863207 4940 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1b9efcfe-df2d-405e-9f10-d22dbce174e9-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: 
I0223 09:08:22.863215 4940 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1b9efcfe-df2d-405e-9f10-d22dbce174e9-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863223 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b9efcfe-df2d-405e-9f10-d22dbce174e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.863232 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.874254 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" (UID: "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.882240 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" (UID: "3ee12f4d-4ae5-496e-b7dc-71b4e3b80300"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.983433 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:22 crc kubenswrapper[4940]: I0223 09:08:22.983467 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.068890 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" event={"ID":"3ee12f4d-4ae5-496e-b7dc-71b4e3b80300","Type":"ContainerDied","Data":"d3f14565721488b5f88e1be4d5eb3ea13b666450778e2f7a4067c47614bc715c"} Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.068956 4940 scope.go:117] "RemoveContainer" containerID="3fd79e18a9a35c110c0ea409f0a2bba6996e2006a65f895f949e188b602368ff" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.069105 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5lc7s" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.074987 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-skhdb-config-k47nn"] Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.085812 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7453ffea-4a1f-426c-ac9c-3377973fdb19","Type":"ContainerStarted","Data":"4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f"} Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.086078 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.089168 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"987e4448-8da2-41e3-9dba-777d599609f5","Type":"ContainerStarted","Data":"01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259"} Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.089531 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.095477 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4jjdt" event={"ID":"1b9efcfe-df2d-405e-9f10-d22dbce174e9","Type":"ContainerDied","Data":"7daf4fdf2db2137a4e9297d95ff2c7ddd1b8baa90831edeb1ce590be8ac1321d"} Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.095530 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7daf4fdf2db2137a4e9297d95ff2c7ddd1b8baa90831edeb1ce590be8ac1321d" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.095624 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4jjdt" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.103976 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5lc7s"] Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.110801 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5lc7s"] Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.115858 4940 scope.go:117] "RemoveContainer" containerID="0269ecd30a3ef9ef55024688668afe4bf9fdf73c5ca4d29f60884609f4eb964c" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.127847 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.97452503 podStartE2EDuration="1m16.127832468s" podCreationTimestamp="2026-02-23 09:07:07 +0000 UTC" firstStartedPulling="2026-02-23 09:07:09.994397887 +0000 UTC m=+1161.377604034" lastFinishedPulling="2026-02-23 09:07:40.147705315 +0000 UTC m=+1191.530911472" observedRunningTime="2026-02-23 09:08:23.120990805 +0000 UTC m=+1234.504196962" watchObservedRunningTime="2026-02-23 09:08:23.127832468 +0000 UTC m=+1234.511038625" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.346948 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.057956178 podStartE2EDuration="1m16.34693009s" podCreationTimestamp="2026-02-23 09:07:07 +0000 UTC" firstStartedPulling="2026-02-23 09:07:09.994072127 +0000 UTC m=+1161.377278284" lastFinishedPulling="2026-02-23 09:07:40.283046029 +0000 UTC m=+1191.666252196" observedRunningTime="2026-02-23 09:08:23.174502706 +0000 UTC m=+1234.557708883" watchObservedRunningTime="2026-02-23 09:08:23.34693009 +0000 UTC m=+1234.730136247" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.362760 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" 
path="/var/lib/kubelet/pods/3ee12f4d-4ae5-496e-b7dc-71b4e3b80300/volumes" Feb 23 09:08:23 crc kubenswrapper[4940]: I0223 09:08:23.363853 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 09:08:23 crc kubenswrapper[4940]: W0223 09:08:23.367491 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b985bfb_fd7d_4c37_b935_26bc80e96fc0.slice/crio-d829e46b9bda3ee2e747a2c799e8f5b30be1de6579ef2aabfd83ccf70bc2cc11 WatchSource:0}: Error finding container d829e46b9bda3ee2e747a2c799e8f5b30be1de6579ef2aabfd83ccf70bc2cc11: Status 404 returned error can't find the container with id d829e46b9bda3ee2e747a2c799e8f5b30be1de6579ef2aabfd83ccf70bc2cc11 Feb 23 09:08:24 crc kubenswrapper[4940]: I0223 09:08:24.105056 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"d829e46b9bda3ee2e747a2c799e8f5b30be1de6579ef2aabfd83ccf70bc2cc11"} Feb 23 09:08:24 crc kubenswrapper[4940]: I0223 09:08:24.106639 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p9579" event={"ID":"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e","Type":"ContainerStarted","Data":"a899ed4bfcf3daffef0949e5d81e86917d231cd12db0067ee4d54d594794bd8b"} Feb 23 09:08:24 crc kubenswrapper[4940]: I0223 09:08:24.111313 4940 generic.go:334] "Generic (PLEG): container finished" podID="8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" containerID="572ce6d167ac51d0ad91a991b210bb256b4159032f69211871f7840ee7d58b59" exitCode=0 Feb 23 09:08:24 crc kubenswrapper[4940]: I0223 09:08:24.112106 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb-config-k47nn" event={"ID":"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f","Type":"ContainerDied","Data":"572ce6d167ac51d0ad91a991b210bb256b4159032f69211871f7840ee7d58b59"} Feb 23 09:08:24 crc kubenswrapper[4940]: I0223 
09:08:24.112145 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb-config-k47nn" event={"ID":"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f","Type":"ContainerStarted","Data":"2e4cd0106ea4a51fa622e6f97681dc8c8d060980cc970fbcadeb264134b041d1"} Feb 23 09:08:24 crc kubenswrapper[4940]: I0223 09:08:24.131146 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-p9579" podStartSLOduration=3.321451105 podStartE2EDuration="18.131122145s" podCreationTimestamp="2026-02-23 09:08:06 +0000 UTC" firstStartedPulling="2026-02-23 09:08:07.741873137 +0000 UTC m=+1219.125079294" lastFinishedPulling="2026-02-23 09:08:22.551544177 +0000 UTC m=+1233.934750334" observedRunningTime="2026-02-23 09:08:24.124945954 +0000 UTC m=+1235.508152111" watchObservedRunningTime="2026-02-23 09:08:24.131122145 +0000 UTC m=+1235.514328322" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.409099 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580083 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-scripts\") pod \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580199 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clnfj\" (UniqueName: \"kubernetes.io/projected/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-kube-api-access-clnfj\") pod \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580237 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-additional-scripts\") pod \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580274 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run-ovn\") pod \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580328 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-log-ovn\") pod \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580368 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run\") pod \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\" (UID: \"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f\") " Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580439 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" (UID: "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580484 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" (UID: "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580599 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run" (OuterVolumeSpecName: "var-run") pod "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" (UID: "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580952 4940 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.580983 4940 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.581000 4940 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.581161 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" (UID: "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.581417 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-scripts" (OuterVolumeSpecName: "scripts") pod "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" (UID: "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.584495 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-kube-api-access-clnfj" (OuterVolumeSpecName: "kube-api-access-clnfj") pod "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" (UID: "8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f"). InnerVolumeSpecName "kube-api-access-clnfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.682558 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.682601 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clnfj\" (UniqueName: \"kubernetes.io/projected/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-kube-api-access-clnfj\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:25 crc kubenswrapper[4940]: I0223 09:08:25.682631 4940 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.131486 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb-config-k47nn" event={"ID":"8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f","Type":"ContainerDied","Data":"2e4cd0106ea4a51fa622e6f97681dc8c8d060980cc970fbcadeb264134b041d1"} Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.131515 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-skhdb-config-k47nn" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.131525 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e4cd0106ea4a51fa622e6f97681dc8c8d060980cc970fbcadeb264134b041d1" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.134095 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"e39edfe19b75b5e4e4b04cc40b606f14ccfaeb9437d245100dfe106e262e672f"} Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.134223 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"1bd97d0a7df558de708e5db5e9f652d5ac43f243f1edeb0d4f12d2d35f78056d"} Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.134298 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"1265af5268d560db61702e3a1ae1835aba5d57b3161d342e1a173118facb0073"} Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.134372 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"bd4be644920861747bbdc7509e0395a949649b719a01c97a89b8a110b6d76b72"} Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.531351 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-skhdb-config-k47nn"] Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.538703 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-skhdb-config-k47nn"] Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.632793 4940 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-skhdb-config-gfq4q"] Feb 23 09:08:26 crc kubenswrapper[4940]: E0223 09:08:26.633440 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9efcfe-df2d-405e-9f10-d22dbce174e9" containerName="swift-ring-rebalance" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.633542 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9efcfe-df2d-405e-9f10-d22dbce174e9" containerName="swift-ring-rebalance" Feb 23 09:08:26 crc kubenswrapper[4940]: E0223 09:08:26.633631 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="dnsmasq-dns" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.633700 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="dnsmasq-dns" Feb 23 09:08:26 crc kubenswrapper[4940]: E0223 09:08:26.633766 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" containerName="ovn-config" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.633832 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" containerName="ovn-config" Feb 23 09:08:26 crc kubenswrapper[4940]: E0223 09:08:26.633898 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="init" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.633958 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="init" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.634196 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee12f4d-4ae5-496e-b7dc-71b4e3b80300" containerName="dnsmasq-dns" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.634307 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9efcfe-df2d-405e-9f10-d22dbce174e9" 
containerName="swift-ring-rebalance" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.634400 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" containerName="ovn-config" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.635151 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.640413 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.653758 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-skhdb-config-gfq4q"] Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.802039 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.802100 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-additional-scripts\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.802194 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-log-ovn\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " 
pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.802282 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8thvx\" (UniqueName: \"kubernetes.io/projected/62f261cc-72c8-42db-8cac-a70b0d2218c2-kube-api-access-8thvx\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.802335 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-scripts\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.802382 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run-ovn\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.904661 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.904733 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-additional-scripts\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: 
\"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.904831 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-log-ovn\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.904884 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8thvx\" (UniqueName: \"kubernetes.io/projected/62f261cc-72c8-42db-8cac-a70b0d2218c2-kube-api-access-8thvx\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.904963 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-scripts\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.905026 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run-ovn\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.905351 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run-ovn\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: 
\"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.905353 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-log-ovn\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.905353 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.906156 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-additional-scripts\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.908179 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-scripts\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.923028 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8thvx\" (UniqueName: \"kubernetes.io/projected/62f261cc-72c8-42db-8cac-a70b0d2218c2-kube-api-access-8thvx\") pod \"ovn-controller-skhdb-config-gfq4q\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " 
pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:26 crc kubenswrapper[4940]: I0223 09:08:26.958965 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:27 crc kubenswrapper[4940]: I0223 09:08:27.166412 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"dbe00a6b58dcaa8eafc675f3d0ece0ab8319169863766da27977d71288be2017"} Feb 23 09:08:27 crc kubenswrapper[4940]: I0223 09:08:27.166646 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"ef851bafc4a499711b4f6bcb26145a28488d59b1ce06588fd40852828aadc645"} Feb 23 09:08:27 crc kubenswrapper[4940]: I0223 09:08:27.277636 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-skhdb-config-gfq4q"] Feb 23 09:08:27 crc kubenswrapper[4940]: I0223 09:08:27.354210 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f" path="/var/lib/kubelet/pods/8dbb4d85-4bca-4fe5-8121-b53ddd1b3a6f/volumes" Feb 23 09:08:27 crc kubenswrapper[4940]: I0223 09:08:27.632409 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-skhdb" Feb 23 09:08:28 crc kubenswrapper[4940]: I0223 09:08:28.174693 4940 generic.go:334] "Generic (PLEG): container finished" podID="62f261cc-72c8-42db-8cac-a70b0d2218c2" containerID="d1e7c42006c191648eec14ed5655d867d922aff230310c3c25cd193649eb7d9c" exitCode=0 Feb 23 09:08:28 crc kubenswrapper[4940]: I0223 09:08:28.174772 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb-config-gfq4q" 
event={"ID":"62f261cc-72c8-42db-8cac-a70b0d2218c2","Type":"ContainerDied","Data":"d1e7c42006c191648eec14ed5655d867d922aff230310c3c25cd193649eb7d9c"} Feb 23 09:08:28 crc kubenswrapper[4940]: I0223 09:08:28.175095 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb-config-gfq4q" event={"ID":"62f261cc-72c8-42db-8cac-a70b0d2218c2","Type":"ContainerStarted","Data":"0f99810411884b976624e025a19a53dffdf2eee9bc8bfdef65651eda46f4bb71"} Feb 23 09:08:28 crc kubenswrapper[4940]: I0223 09:08:28.178666 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"fdcdcaf484721e12fefd1f617faec9f570527320db5bd75ac08e59af04f3f605"} Feb 23 09:08:28 crc kubenswrapper[4940]: I0223 09:08:28.178707 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"6108e533c89b050b724380e21d4926b400858b1fb69402afd802f5c57abcdd8c"} Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.193598 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"9ad2916e1edcbbf65062bcaabcffbb91a8f740f5513a6b0140ea5c8319565514"} Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.193931 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"75ee8f0c7c7197d99be8cb21caa3ef1141f0521afe8815cf533daa027b5ea4e1"} Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.508508 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.668894 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-scripts\") pod \"62f261cc-72c8-42db-8cac-a70b0d2218c2\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.668954 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run\") pod \"62f261cc-72c8-42db-8cac-a70b0d2218c2\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669019 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-log-ovn\") pod \"62f261cc-72c8-42db-8cac-a70b0d2218c2\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669051 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run-ovn\") pod \"62f261cc-72c8-42db-8cac-a70b0d2218c2\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669050 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run" (OuterVolumeSpecName: "var-run") pod "62f261cc-72c8-42db-8cac-a70b0d2218c2" (UID: "62f261cc-72c8-42db-8cac-a70b0d2218c2"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669129 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-additional-scripts\") pod \"62f261cc-72c8-42db-8cac-a70b0d2218c2\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669150 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "62f261cc-72c8-42db-8cac-a70b0d2218c2" (UID: "62f261cc-72c8-42db-8cac-a70b0d2218c2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669173 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8thvx\" (UniqueName: \"kubernetes.io/projected/62f261cc-72c8-42db-8cac-a70b0d2218c2-kube-api-access-8thvx\") pod \"62f261cc-72c8-42db-8cac-a70b0d2218c2\" (UID: \"62f261cc-72c8-42db-8cac-a70b0d2218c2\") " Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669492 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "62f261cc-72c8-42db-8cac-a70b0d2218c2" (UID: "62f261cc-72c8-42db-8cac-a70b0d2218c2"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.669919 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "62f261cc-72c8-42db-8cac-a70b0d2218c2" (UID: "62f261cc-72c8-42db-8cac-a70b0d2218c2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.670103 4940 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.670154 4940 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.670170 4940 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/62f261cc-72c8-42db-8cac-a70b0d2218c2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.670183 4940 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.671051 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-scripts" (OuterVolumeSpecName: "scripts") pod "62f261cc-72c8-42db-8cac-a70b0d2218c2" (UID: "62f261cc-72c8-42db-8cac-a70b0d2218c2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.673434 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f261cc-72c8-42db-8cac-a70b0d2218c2-kube-api-access-8thvx" (OuterVolumeSpecName: "kube-api-access-8thvx") pod "62f261cc-72c8-42db-8cac-a70b0d2218c2" (UID: "62f261cc-72c8-42db-8cac-a70b0d2218c2"). InnerVolumeSpecName "kube-api-access-8thvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.771905 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8thvx\" (UniqueName: \"kubernetes.io/projected/62f261cc-72c8-42db-8cac-a70b0d2218c2-kube-api-access-8thvx\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:29 crc kubenswrapper[4940]: I0223 09:08:29.771936 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f261cc-72c8-42db-8cac-a70b0d2218c2-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.209464 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"876aa14239e80d725d2f860dfd683f3ccfd2a764cf9a1c40ed663757e8bed4f9"} Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.209779 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"e402631454929aea75ec5d19e7eb84987a164d748eeee8319ef4b3c0a1c3eb6f"} Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.209789 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"fc23b077a98b0ae02dfb9bf8a7b9ee8e75203c8cb770df1db577ce472e60bec5"} Feb 23 09:08:30 crc 
kubenswrapper[4940]: I0223 09:08:30.209798 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"27cc0ca2550e3ca16ad62776fe67c5548910bfb38e19f7b2ce4ae66289d31fa9"} Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.209808 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b985bfb-fd7d-4c37-b935-26bc80e96fc0","Type":"ContainerStarted","Data":"e96eeda33898345cc2386e6b3bd3878c5e7c0d9b1ccdf6c9e9fad317573fe367"} Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.213136 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-skhdb-config-gfq4q" event={"ID":"62f261cc-72c8-42db-8cac-a70b0d2218c2","Type":"ContainerDied","Data":"0f99810411884b976624e025a19a53dffdf2eee9bc8bfdef65651eda46f4bb71"} Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.213163 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f99810411884b976624e025a19a53dffdf2eee9bc8bfdef65651eda46f4bb71" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.213209 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-skhdb-config-gfq4q" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.249407 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.913567637 podStartE2EDuration="26.249325138s" podCreationTimestamp="2026-02-23 09:08:04 +0000 UTC" firstStartedPulling="2026-02-23 09:08:23.369750298 +0000 UTC m=+1234.752956455" lastFinishedPulling="2026-02-23 09:08:28.705507799 +0000 UTC m=+1240.088713956" observedRunningTime="2026-02-23 09:08:30.242448524 +0000 UTC m=+1241.625654691" watchObservedRunningTime="2026-02-23 09:08:30.249325138 +0000 UTC m=+1241.632531295" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.553880 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-n8vjw"] Feb 23 09:08:30 crc kubenswrapper[4940]: E0223 09:08:30.554195 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f261cc-72c8-42db-8cac-a70b0d2218c2" containerName="ovn-config" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.554211 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f261cc-72c8-42db-8cac-a70b0d2218c2" containerName="ovn-config" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.554394 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f261cc-72c8-42db-8cac-a70b0d2218c2" containerName="ovn-config" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.555192 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.557774 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.569662 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-n8vjw"] Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.658467 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-skhdb-config-gfq4q"] Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.669378 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-skhdb-config-gfq4q"] Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.686493 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75p8\" (UniqueName: \"kubernetes.io/projected/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-kube-api-access-h75p8\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.686819 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-config\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.686912 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 
09:08:30.686994 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.687082 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.687176 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.789187 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-config\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.789240 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.789258 
4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.789288 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.789325 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.789379 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75p8\" (UniqueName: \"kubernetes.io/projected/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-kube-api-access-h75p8\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.790306 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-config\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.790398 4940 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.790785 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.790782 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.790963 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.808270 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75p8\" (UniqueName: \"kubernetes.io/projected/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-kube-api-access-h75p8\") pod \"dnsmasq-dns-764c5664d7-n8vjw\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:30 crc kubenswrapper[4940]: I0223 09:08:30.870007 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:31 crc kubenswrapper[4940]: I0223 09:08:31.234397 4940 generic.go:334] "Generic (PLEG): container finished" podID="0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" containerID="a899ed4bfcf3daffef0949e5d81e86917d231cd12db0067ee4d54d594794bd8b" exitCode=0 Feb 23 09:08:31 crc kubenswrapper[4940]: I0223 09:08:31.235053 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p9579" event={"ID":"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e","Type":"ContainerDied","Data":"a899ed4bfcf3daffef0949e5d81e86917d231cd12db0067ee4d54d594794bd8b"} Feb 23 09:08:31 crc kubenswrapper[4940]: I0223 09:08:31.358753 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f261cc-72c8-42db-8cac-a70b0d2218c2" path="/var/lib/kubelet/pods/62f261cc-72c8-42db-8cac-a70b0d2218c2/volumes" Feb 23 09:08:31 crc kubenswrapper[4940]: I0223 09:08:31.364405 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-n8vjw"] Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.243406 4940 generic.go:334] "Generic (PLEG): container finished" podID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerID="96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859" exitCode=0 Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.243514 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" event={"ID":"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d","Type":"ContainerDied","Data":"96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859"} Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.243839 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" event={"ID":"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d","Type":"ContainerStarted","Data":"58c298224d0e00269ac8578b4f38256f0e685b34b9a80e97661237134fbd73f3"} Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.759938 4940 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p9579" Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.920303 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-db-sync-config-data\") pod \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.920429 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-combined-ca-bundle\") pod \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.920475 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rllwr\" (UniqueName: \"kubernetes.io/projected/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-kube-api-access-rllwr\") pod \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.920513 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-config-data\") pod \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\" (UID: \"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e\") " Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.926293 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-kube-api-access-rllwr" (OuterVolumeSpecName: "kube-api-access-rllwr") pod "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" (UID: "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e"). InnerVolumeSpecName "kube-api-access-rllwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.926367 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" (UID: "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.945521 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" (UID: "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:32 crc kubenswrapper[4940]: I0223 09:08:32.963123 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-config-data" (OuterVolumeSpecName: "config-data") pod "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" (UID: "0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.022561 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.022812 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rllwr\" (UniqueName: \"kubernetes.io/projected/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-kube-api-access-rllwr\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.022833 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.022844 4940 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.260535 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p9579" event={"ID":"0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e","Type":"ContainerDied","Data":"2b1e87a7185a10bbf14dcda5a0dc9316d86c754aafb696c2f0885c7831ef7e53"} Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.260572 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b1e87a7185a10bbf14dcda5a0dc9316d86c754aafb696c2f0885c7831ef7e53" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.260641 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-p9579" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.269767 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" event={"ID":"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d","Type":"ContainerStarted","Data":"c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422"} Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.269931 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.303308 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" podStartSLOduration=3.3032879299999998 podStartE2EDuration="3.30328793s" podCreationTimestamp="2026-02-23 09:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:33.288205862 +0000 UTC m=+1244.671412019" watchObservedRunningTime="2026-02-23 09:08:33.30328793 +0000 UTC m=+1244.686494107" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.611360 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-n8vjw"] Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.650430 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-czh8n"] Feb 23 09:08:33 crc kubenswrapper[4940]: E0223 09:08:33.650769 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" containerName="glance-db-sync" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.650786 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" containerName="glance-db-sync" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.650937 4940 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" containerName="glance-db-sync" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.651755 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.675565 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-czh8n"] Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.835726 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.835804 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.836291 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ww5s\" (UniqueName: \"kubernetes.io/projected/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-kube-api-access-2ww5s\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.836447 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: 
\"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.836506 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-config\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.836690 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.938323 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ww5s\" (UniqueName: \"kubernetes.io/projected/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-kube-api-access-2ww5s\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.938401 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.938442 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-config\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: 
\"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.938482 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.938531 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.938563 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.939354 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-config\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.939438 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 
09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.939692 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.940010 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.940402 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.958076 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ww5s\" (UniqueName: \"kubernetes.io/projected/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-kube-api-access-2ww5s\") pod \"dnsmasq-dns-74f6bcbc87-czh8n\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") " pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:33 crc kubenswrapper[4940]: I0223 09:08:33.984633 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:34 crc kubenswrapper[4940]: I0223 09:08:34.420825 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-czh8n"] Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.286696 4940 generic.go:334] "Generic (PLEG): container finished" podID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerID="de637a66fdb051f9a3f6f9ebd74e992ff6ea57aba8da0127ab4e6d0f58dc984c" exitCode=0 Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.286791 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" event={"ID":"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284","Type":"ContainerDied","Data":"de637a66fdb051f9a3f6f9ebd74e992ff6ea57aba8da0127ab4e6d0f58dc984c"} Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.287132 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerName="dnsmasq-dns" containerID="cri-o://c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422" gracePeriod=10 Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.287165 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" event={"ID":"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284","Type":"ContainerStarted","Data":"556b35648ccafe647b1d26aa35b82501159431767389488b7c8d4ad6dcd0e7e9"} Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.746392 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.875258 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-svc\") pod \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.875333 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-config\") pod \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.875408 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h75p8\" (UniqueName: \"kubernetes.io/projected/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-kube-api-access-h75p8\") pod \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.875425 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-swift-storage-0\") pod \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.875462 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-sb\") pod \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.875522 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-nb\") pod \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\" (UID: \"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d\") " Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.880546 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-kube-api-access-h75p8" (OuterVolumeSpecName: "kube-api-access-h75p8") pod "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" (UID: "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d"). InnerVolumeSpecName "kube-api-access-h75p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.920101 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" (UID: "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.921070 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" (UID: "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.926521 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-config" (OuterVolumeSpecName: "config") pod "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" (UID: "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.927181 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" (UID: "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.939912 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" (UID: "3dce35c0-f82e-41b1-9716-c56e1b7d5e5d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.977799 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.977831 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.977841 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.977851 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h75p8\" (UniqueName: \"kubernetes.io/projected/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-kube-api-access-h75p8\") on node \"crc\" 
DevicePath \"\"" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.977862 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:35 crc kubenswrapper[4940]: I0223 09:08:35.977870 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.317224 4940 generic.go:334] "Generic (PLEG): container finished" podID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerID="c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422" exitCode=0 Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.317285 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" event={"ID":"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d","Type":"ContainerDied","Data":"c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422"} Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.317333 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.317351 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-n8vjw" event={"ID":"3dce35c0-f82e-41b1-9716-c56e1b7d5e5d","Type":"ContainerDied","Data":"58c298224d0e00269ac8578b4f38256f0e685b34b9a80e97661237134fbd73f3"} Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.317380 4940 scope.go:117] "RemoveContainer" containerID="c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.330912 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" event={"ID":"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284","Type":"ContainerStarted","Data":"d17f230221091f10b35e26889ddeb92a41f743813d36024c58f17a18baacb26b"} Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.331458 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.418206 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" podStartSLOduration=3.418182873 podStartE2EDuration="3.418182873s" podCreationTimestamp="2026-02-23 09:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:36.41710006 +0000 UTC m=+1247.800306217" watchObservedRunningTime="2026-02-23 09:08:36.418182873 +0000 UTC m=+1247.801389030" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.419585 4940 scope.go:117] "RemoveContainer" containerID="96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.444333 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-n8vjw"] Feb 23 09:08:36 crc 
kubenswrapper[4940]: I0223 09:08:36.450643 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-n8vjw"] Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.477073 4940 scope.go:117] "RemoveContainer" containerID="c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422" Feb 23 09:08:36 crc kubenswrapper[4940]: E0223 09:08:36.477462 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422\": container with ID starting with c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422 not found: ID does not exist" containerID="c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.477494 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422"} err="failed to get container status \"c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422\": rpc error: code = NotFound desc = could not find container \"c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422\": container with ID starting with c5e44c30580ff3021854bf137e32c15ce7217039e154f05cbebebaeb37c1f422 not found: ID does not exist" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.477515 4940 scope.go:117] "RemoveContainer" containerID="96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859" Feb 23 09:08:36 crc kubenswrapper[4940]: E0223 09:08:36.477838 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859\": container with ID starting with 96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859 not found: ID does not exist" 
containerID="96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859" Feb 23 09:08:36 crc kubenswrapper[4940]: I0223 09:08:36.477861 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859"} err="failed to get container status \"96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859\": rpc error: code = NotFound desc = could not find container \"96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859\": container with ID starting with 96094fffcc6d34a9bc43f77a6b435e2dc6ea4eb15d71208f6654c150f2492859 not found: ID does not exist" Feb 23 09:08:37 crc kubenswrapper[4940]: I0223 09:08:37.358410 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" path="/var/lib/kubelet/pods/3dce35c0-f82e-41b1-9716-c56e1b7d5e5d/volumes" Feb 23 09:08:38 crc kubenswrapper[4940]: I0223 09:08:38.614198 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 09:08:38 crc kubenswrapper[4940]: I0223 09:08:38.957898 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.031069 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-t4mpt"] Feb 23 09:08:39 crc kubenswrapper[4940]: E0223 09:08:39.031408 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerName="init" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.031424 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerName="init" Feb 23 09:08:39 crc kubenswrapper[4940]: E0223 09:08:39.031441 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" 
containerName="dnsmasq-dns" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.031447 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerName="dnsmasq-dns" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.031603 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dce35c0-f82e-41b1-9716-c56e1b7d5e5d" containerName="dnsmasq-dns" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.032131 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.043767 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f7ce-account-create-update-hms8s"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.044981 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.048543 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.055353 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-t4mpt"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.113368 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7ce-account-create-update-hms8s"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.163131 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-cjlx5"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.164538 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.176774 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbecedf9-3f67-471e-b8e7-8945107b9055-operator-scripts\") pod \"cinder-f7ce-account-create-update-hms8s\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.176880 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzpm\" (UniqueName: \"kubernetes.io/projected/534d5483-19f1-48db-92f4-7311eb8e0bdd-kube-api-access-ppzpm\") pod \"cinder-db-create-t4mpt\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.176982 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krdb\" (UniqueName: \"kubernetes.io/projected/bbecedf9-3f67-471e-b8e7-8945107b9055-kube-api-access-9krdb\") pod \"cinder-f7ce-account-create-update-hms8s\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.177035 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/534d5483-19f1-48db-92f4-7311eb8e0bdd-operator-scripts\") pod \"cinder-db-create-t4mpt\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.189785 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-cjlx5"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.278440 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krdb\" (UniqueName: \"kubernetes.io/projected/bbecedf9-3f67-471e-b8e7-8945107b9055-kube-api-access-9krdb\") pod \"cinder-f7ce-account-create-update-hms8s\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.278508 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bcc7d8-8eed-4039-97fa-d156a882474c-operator-scripts\") pod \"manila-db-create-cjlx5\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.278568 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/534d5483-19f1-48db-92f4-7311eb8e0bdd-operator-scripts\") pod \"cinder-db-create-t4mpt\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.278693 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bgx\" (UniqueName: \"kubernetes.io/projected/33bcc7d8-8eed-4039-97fa-d156a882474c-kube-api-access-24bgx\") pod \"manila-db-create-cjlx5\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.278755 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbecedf9-3f67-471e-b8e7-8945107b9055-operator-scripts\") pod \"cinder-f7ce-account-create-update-hms8s\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc 
kubenswrapper[4940]: I0223 09:08:39.278960 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppzpm\" (UniqueName: \"kubernetes.io/projected/534d5483-19f1-48db-92f4-7311eb8e0bdd-kube-api-access-ppzpm\") pod \"cinder-db-create-t4mpt\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.279527 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/534d5483-19f1-48db-92f4-7311eb8e0bdd-operator-scripts\") pod \"cinder-db-create-t4mpt\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.279702 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbecedf9-3f67-471e-b8e7-8945107b9055-operator-scripts\") pod \"cinder-f7ce-account-create-update-hms8s\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.305210 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppzpm\" (UniqueName: \"kubernetes.io/projected/534d5483-19f1-48db-92f4-7311eb8e0bdd-kube-api-access-ppzpm\") pod \"cinder-db-create-t4mpt\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.310864 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krdb\" (UniqueName: \"kubernetes.io/projected/bbecedf9-3f67-471e-b8e7-8945107b9055-kube-api-access-9krdb\") pod \"cinder-f7ce-account-create-update-hms8s\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 
09:08:39.358093 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.366247 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ztzmr"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.368433 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.368703 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.370957 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.371515 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.371716 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l76pf" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.371847 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.382999 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24bgx\" (UniqueName: \"kubernetes.io/projected/33bcc7d8-8eed-4039-97fa-d156a882474c-kube-api-access-24bgx\") pod \"manila-db-create-cjlx5\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.383880 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bcc7d8-8eed-4039-97fa-d156a882474c-operator-scripts\") pod \"manila-db-create-cjlx5\" 
(UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.384711 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bcc7d8-8eed-4039-97fa-d156a882474c-operator-scripts\") pod \"manila-db-create-cjlx5\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.437540 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24bgx\" (UniqueName: \"kubernetes.io/projected/33bcc7d8-8eed-4039-97fa-d156a882474c-kube-api-access-24bgx\") pod \"manila-db-create-cjlx5\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.440392 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jw74t"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.441763 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.448807 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fca0-account-create-update-cbpzc"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.449816 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.452398 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.463492 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ztzmr"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.478380 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jw74t"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.488033 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fca0-account-create-update-cbpzc"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.488853 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-combined-ca-bundle\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.489003 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqfc\" (UniqueName: \"kubernetes.io/projected/9063161f-40d0-49a1-a4f2-f68a3aff7897-kube-api-access-9zqfc\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.489034 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-config-data\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 
09:08:39.501362 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.577501 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-8086-account-create-update-pmxvb"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.579517 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.584295 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.585167 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-8086-account-create-update-pmxvb"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.593713 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zqfc\" (UniqueName: \"kubernetes.io/projected/9063161f-40d0-49a1-a4f2-f68a3aff7897-kube-api-access-9zqfc\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.593777 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-config-data\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.593873 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz8x2\" (UniqueName: \"kubernetes.io/projected/e79ed92a-52e6-42a1-9870-08a965e41cd0-kube-api-access-zz8x2\") pod \"barbican-fca0-account-create-update-cbpzc\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " 
pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.593915 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7qkh\" (UniqueName: \"kubernetes.io/projected/1900fd27-c407-4691-8b8c-c92f97c6829e-kube-api-access-g7qkh\") pod \"barbican-db-create-jw74t\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.593968 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-combined-ca-bundle\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.594020 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79ed92a-52e6-42a1-9870-08a965e41cd0-operator-scripts\") pod \"barbican-fca0-account-create-update-cbpzc\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.594053 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1900fd27-c407-4691-8b8c-c92f97c6829e-operator-scripts\") pod \"barbican-db-create-jw74t\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.603292 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-config-data\") pod \"keystone-db-sync-ztzmr\" (UID: 
\"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.610815 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-combined-ca-bundle\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.620422 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zqfc\" (UniqueName: \"kubernetes.io/projected/9063161f-40d0-49a1-a4f2-f68a3aff7897-kube-api-access-9zqfc\") pod \"keystone-db-sync-ztzmr\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.641721 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zcqlx"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.643154 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.671834 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zcqlx"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.701346 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz8x2\" (UniqueName: \"kubernetes.io/projected/e79ed92a-52e6-42a1-9870-08a965e41cd0-kube-api-access-zz8x2\") pod \"barbican-fca0-account-create-update-cbpzc\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.701747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7qkh\" (UniqueName: \"kubernetes.io/projected/1900fd27-c407-4691-8b8c-c92f97c6829e-kube-api-access-g7qkh\") pod \"barbican-db-create-jw74t\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.701802 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b8ddee-466a-4cfa-b22f-c5b256a5b602-operator-scripts\") pod \"manila-8086-account-create-update-pmxvb\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.701880 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79ed92a-52e6-42a1-9870-08a965e41cd0-operator-scripts\") pod \"barbican-fca0-account-create-update-cbpzc\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.701913 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1900fd27-c407-4691-8b8c-c92f97c6829e-operator-scripts\") pod \"barbican-db-create-jw74t\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.701960 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4x5t\" (UniqueName: \"kubernetes.io/projected/52b8ddee-466a-4cfa-b22f-c5b256a5b602-kube-api-access-t4x5t\") pod \"manila-8086-account-create-update-pmxvb\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.702684 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79ed92a-52e6-42a1-9870-08a965e41cd0-operator-scripts\") pod \"barbican-fca0-account-create-update-cbpzc\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.702973 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1900fd27-c407-4691-8b8c-c92f97c6829e-operator-scripts\") pod \"barbican-db-create-jw74t\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.724116 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7qkh\" (UniqueName: \"kubernetes.io/projected/1900fd27-c407-4691-8b8c-c92f97c6829e-kube-api-access-g7qkh\") pod \"barbican-db-create-jw74t\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.724717 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz8x2\" (UniqueName: \"kubernetes.io/projected/e79ed92a-52e6-42a1-9870-08a965e41cd0-kube-api-access-zz8x2\") pod \"barbican-fca0-account-create-update-cbpzc\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.803984 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5nw\" (UniqueName: \"kubernetes.io/projected/926cba46-e952-43fd-a42e-9dfaa77e74d0-kube-api-access-8k5nw\") pod \"neutron-db-create-zcqlx\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.804048 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4x5t\" (UniqueName: \"kubernetes.io/projected/52b8ddee-466a-4cfa-b22f-c5b256a5b602-kube-api-access-t4x5t\") pod \"manila-8086-account-create-update-pmxvb\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.804195 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/926cba46-e952-43fd-a42e-9dfaa77e74d0-operator-scripts\") pod \"neutron-db-create-zcqlx\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.804239 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b8ddee-466a-4cfa-b22f-c5b256a5b602-operator-scripts\") pod \"manila-8086-account-create-update-pmxvb\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " 
pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.811022 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b8ddee-466a-4cfa-b22f-c5b256a5b602-operator-scripts\") pod \"manila-8086-account-create-update-pmxvb\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.819170 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0ed7-account-create-update-rqmbz"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.820489 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.823394 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.825389 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.832738 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4x5t\" (UniqueName: \"kubernetes.io/projected/52b8ddee-466a-4cfa-b22f-c5b256a5b602-kube-api-access-t4x5t\") pod \"manila-8086-account-create-update-pmxvb\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.851222 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0ed7-account-create-update-rqmbz"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.869179 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.879859 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-t4mpt"] Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.882440 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.907402 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k5nw\" (UniqueName: \"kubernetes.io/projected/926cba46-e952-43fd-a42e-9dfaa77e74d0-kube-api-access-8k5nw\") pod \"neutron-db-create-zcqlx\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.907485 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdmr\" (UniqueName: \"kubernetes.io/projected/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-kube-api-access-5gdmr\") pod \"neutron-0ed7-account-create-update-rqmbz\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.907542 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-operator-scripts\") pod \"neutron-0ed7-account-create-update-rqmbz\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.907592 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/926cba46-e952-43fd-a42e-9dfaa77e74d0-operator-scripts\") pod \"neutron-db-create-zcqlx\" (UID: 
\"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.908326 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/926cba46-e952-43fd-a42e-9dfaa77e74d0-operator-scripts\") pod \"neutron-db-create-zcqlx\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:39 crc kubenswrapper[4940]: I0223 09:08:39.929891 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k5nw\" (UniqueName: \"kubernetes.io/projected/926cba46-e952-43fd-a42e-9dfaa77e74d0-kube-api-access-8k5nw\") pod \"neutron-db-create-zcqlx\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:39.961980 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:39.977412 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.007629 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7ce-account-create-update-hms8s"] Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.012298 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-operator-scripts\") pod \"neutron-0ed7-account-create-update-rqmbz\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.012688 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gdmr\" (UniqueName: \"kubernetes.io/projected/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-kube-api-access-5gdmr\") pod \"neutron-0ed7-account-create-update-rqmbz\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.013562 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-operator-scripts\") pod \"neutron-0ed7-account-create-update-rqmbz\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.037221 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gdmr\" (UniqueName: \"kubernetes.io/projected/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-kube-api-access-5gdmr\") pod \"neutron-0ed7-account-create-update-rqmbz\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.153929 4940 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-cjlx5"] Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.155946 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:40 crc kubenswrapper[4940]: W0223 09:08:40.162232 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33bcc7d8_8eed_4039_97fa_d156a882474c.slice/crio-6e526df90e3495ac2b72ae367a71306eab64df653ffef4b7d288dbb1c29acc3f WatchSource:0}: Error finding container 6e526df90e3495ac2b72ae367a71306eab64df653ffef4b7d288dbb1c29acc3f: Status 404 returned error can't find the container with id 6e526df90e3495ac2b72ae367a71306eab64df653ffef4b7d288dbb1c29acc3f Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.367760 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cjlx5" event={"ID":"33bcc7d8-8eed-4039-97fa-d156a882474c","Type":"ContainerStarted","Data":"100755ce628d7b75bc077814d6db070d80ec0892f5ebded60652945511ef5835"} Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.368203 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cjlx5" event={"ID":"33bcc7d8-8eed-4039-97fa-d156a882474c","Type":"ContainerStarted","Data":"6e526df90e3495ac2b72ae367a71306eab64df653ffef4b7d288dbb1c29acc3f"} Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.369672 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7ce-account-create-update-hms8s" event={"ID":"bbecedf9-3f67-471e-b8e7-8945107b9055","Type":"ContainerStarted","Data":"5113cc9aed38e0c069ca83f4113fd2f41c0a4e4ce5a2416899c6c49e8954c612"} Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.369697 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7ce-account-create-update-hms8s" 
event={"ID":"bbecedf9-3f67-471e-b8e7-8945107b9055","Type":"ContainerStarted","Data":"96dfad22f64e49b4b2f6516a1182d54b9c4fd8df53361dba77ac33de9a551d0a"} Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.371411 4940 generic.go:334] "Generic (PLEG): container finished" podID="534d5483-19f1-48db-92f4-7311eb8e0bdd" containerID="718eab3076e08c740b11b125044da354b569ad3ae05e5abee77eeeaf7cc395d0" exitCode=0 Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.371450 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4mpt" event={"ID":"534d5483-19f1-48db-92f4-7311eb8e0bdd","Type":"ContainerDied","Data":"718eab3076e08c740b11b125044da354b569ad3ae05e5abee77eeeaf7cc395d0"} Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.371477 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4mpt" event={"ID":"534d5483-19f1-48db-92f4-7311eb8e0bdd","Type":"ContainerStarted","Data":"0f921791e6f6044b512ba3263fcf0223a26c1f3652cd210b4c934ec3a935fad4"} Feb 23 09:08:40 crc kubenswrapper[4940]: I0223 09:08:40.394445 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f7ce-account-create-update-hms8s" podStartSLOduration=1.394423689 podStartE2EDuration="1.394423689s" podCreationTimestamp="2026-02-23 09:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:40.389987721 +0000 UTC m=+1251.773193888" watchObservedRunningTime="2026-02-23 09:08:40.394423689 +0000 UTC m=+1251.777629856" Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.064263 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ztzmr"] Feb 23 09:08:41 crc kubenswrapper[4940]: W0223 09:08:41.073587 4940 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9063161f_40d0_49a1_a4f2_f68a3aff7897.slice/crio-01757462a3ac7867a967916f92c1e852726d0989c8a41a394f13d72cb29e588d WatchSource:0}: Error finding container 01757462a3ac7867a967916f92c1e852726d0989c8a41a394f13d72cb29e588d: Status 404 returned error can't find the container with id 01757462a3ac7867a967916f92c1e852726d0989c8a41a394f13d72cb29e588d Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.167164 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fca0-account-create-update-cbpzc"] Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.193915 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jw74t"] Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.228819 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zcqlx"] Feb 23 09:08:41 crc kubenswrapper[4940]: W0223 09:08:41.257715 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b8ddee_466a_4cfa_b22f_c5b256a5b602.slice/crio-dafe68772d986c43942f235009d89ea1d8c54f8b3e9fa59a0e4d5918f52c5081 WatchSource:0}: Error finding container dafe68772d986c43942f235009d89ea1d8c54f8b3e9fa59a0e4d5918f52c5081: Status 404 returned error can't find the container with id dafe68772d986c43942f235009d89ea1d8c54f8b3e9fa59a0e4d5918f52c5081 Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.258185 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-8086-account-create-update-pmxvb"] Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.269564 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0ed7-account-create-update-rqmbz"] Feb 23 09:08:41 crc kubenswrapper[4940]: W0223 09:08:41.272628 4940 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod926cba46_e952_43fd_a42e_9dfaa77e74d0.slice/crio-336519d444336a595628571425220f3981940969a1f1f931a349d9d7fae9e98a WatchSource:0}: Error finding container 336519d444336a595628571425220f3981940969a1f1f931a349d9d7fae9e98a: Status 404 returned error can't find the container with id 336519d444336a595628571425220f3981940969a1f1f931a349d9d7fae9e98a Feb 23 09:08:41 crc kubenswrapper[4940]: W0223 09:08:41.280719 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92c6b8ad_74d4_447c_b8b2_e6302e5a2d55.slice/crio-b38eae5b4c201a851012e024ee7447c01260f362619c2639f22fbfc9ac5e19a8 WatchSource:0}: Error finding container b38eae5b4c201a851012e024ee7447c01260f362619c2639f22fbfc9ac5e19a8: Status 404 returned error can't find the container with id b38eae5b4c201a851012e024ee7447c01260f362619c2639f22fbfc9ac5e19a8 Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.379538 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jw74t" event={"ID":"1900fd27-c407-4691-8b8c-c92f97c6829e","Type":"ContainerStarted","Data":"d38aff858f9fb208ff9a90e9dc9dc2f44c6c295fe6ac3d6cac4c415b82ecd566"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.381107 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8086-account-create-update-pmxvb" event={"ID":"52b8ddee-466a-4cfa-b22f-c5b256a5b602","Type":"ContainerStarted","Data":"dafe68772d986c43942f235009d89ea1d8c54f8b3e9fa59a0e4d5918f52c5081"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.382448 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0ed7-account-create-update-rqmbz" event={"ID":"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55","Type":"ContainerStarted","Data":"b38eae5b4c201a851012e024ee7447c01260f362619c2639f22fbfc9ac5e19a8"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.383373 4940 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztzmr" event={"ID":"9063161f-40d0-49a1-a4f2-f68a3aff7897","Type":"ContainerStarted","Data":"01757462a3ac7867a967916f92c1e852726d0989c8a41a394f13d72cb29e588d"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.384536 4940 generic.go:334] "Generic (PLEG): container finished" podID="33bcc7d8-8eed-4039-97fa-d156a882474c" containerID="100755ce628d7b75bc077814d6db070d80ec0892f5ebded60652945511ef5835" exitCode=0 Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.384575 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cjlx5" event={"ID":"33bcc7d8-8eed-4039-97fa-d156a882474c","Type":"ContainerDied","Data":"100755ce628d7b75bc077814d6db070d80ec0892f5ebded60652945511ef5835"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.388689 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zcqlx" event={"ID":"926cba46-e952-43fd-a42e-9dfaa77e74d0","Type":"ContainerStarted","Data":"336519d444336a595628571425220f3981940969a1f1f931a349d9d7fae9e98a"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.390513 4940 generic.go:334] "Generic (PLEG): container finished" podID="bbecedf9-3f67-471e-b8e7-8945107b9055" containerID="5113cc9aed38e0c069ca83f4113fd2f41c0a4e4ce5a2416899c6c49e8954c612" exitCode=0 Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.390560 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7ce-account-create-update-hms8s" event={"ID":"bbecedf9-3f67-471e-b8e7-8945107b9055","Type":"ContainerDied","Data":"5113cc9aed38e0c069ca83f4113fd2f41c0a4e4ce5a2416899c6c49e8954c612"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.392388 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fca0-account-create-update-cbpzc" 
event={"ID":"e79ed92a-52e6-42a1-9870-08a965e41cd0","Type":"ContainerStarted","Data":"15df81cc49f42d6c63559ab18b6ae01c26bc06d2371aa9c9dff7a46666ce81b2"} Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.835957 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.958793 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppzpm\" (UniqueName: \"kubernetes.io/projected/534d5483-19f1-48db-92f4-7311eb8e0bdd-kube-api-access-ppzpm\") pod \"534d5483-19f1-48db-92f4-7311eb8e0bdd\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.958965 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/534d5483-19f1-48db-92f4-7311eb8e0bdd-operator-scripts\") pod \"534d5483-19f1-48db-92f4-7311eb8e0bdd\" (UID: \"534d5483-19f1-48db-92f4-7311eb8e0bdd\") " Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.960249 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/534d5483-19f1-48db-92f4-7311eb8e0bdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "534d5483-19f1-48db-92f4-7311eb8e0bdd" (UID: "534d5483-19f1-48db-92f4-7311eb8e0bdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:41 crc kubenswrapper[4940]: I0223 09:08:41.969342 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534d5483-19f1-48db-92f4-7311eb8e0bdd-kube-api-access-ppzpm" (OuterVolumeSpecName: "kube-api-access-ppzpm") pod "534d5483-19f1-48db-92f4-7311eb8e0bdd" (UID: "534d5483-19f1-48db-92f4-7311eb8e0bdd"). InnerVolumeSpecName "kube-api-access-ppzpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.060666 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/534d5483-19f1-48db-92f4-7311eb8e0bdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.061215 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppzpm\" (UniqueName: \"kubernetes.io/projected/534d5483-19f1-48db-92f4-7311eb8e0bdd-kube-api-access-ppzpm\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.401843 4940 generic.go:334] "Generic (PLEG): container finished" podID="52b8ddee-466a-4cfa-b22f-c5b256a5b602" containerID="ad0edd3ade96ef715c3dfd49c9b7bdee951b4f2ba1ade606630cba78fd183785" exitCode=0 Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.401940 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8086-account-create-update-pmxvb" event={"ID":"52b8ddee-466a-4cfa-b22f-c5b256a5b602","Type":"ContainerDied","Data":"ad0edd3ade96ef715c3dfd49c9b7bdee951b4f2ba1ade606630cba78fd183785"} Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.403675 4940 generic.go:334] "Generic (PLEG): container finished" podID="92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" containerID="893611724f44d5a274aacb48dc70ebf6c251d1f8b411b2a6851f87e6d911ac78" exitCode=0 Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.403777 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0ed7-account-create-update-rqmbz" event={"ID":"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55","Type":"ContainerDied","Data":"893611724f44d5a274aacb48dc70ebf6c251d1f8b411b2a6851f87e6d911ac78"} Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.407447 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4mpt" 
event={"ID":"534d5483-19f1-48db-92f4-7311eb8e0bdd","Type":"ContainerDied","Data":"0f921791e6f6044b512ba3263fcf0223a26c1f3652cd210b4c934ec3a935fad4"} Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.407486 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4mpt" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.407506 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f921791e6f6044b512ba3263fcf0223a26c1f3652cd210b4c934ec3a935fad4" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.409297 4940 generic.go:334] "Generic (PLEG): container finished" podID="926cba46-e952-43fd-a42e-9dfaa77e74d0" containerID="a271c8366b2a73340221775abdf9bc7b756fa893190124b600d8d50ad96ec250" exitCode=0 Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.409455 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zcqlx" event={"ID":"926cba46-e952-43fd-a42e-9dfaa77e74d0","Type":"ContainerDied","Data":"a271c8366b2a73340221775abdf9bc7b756fa893190124b600d8d50ad96ec250"} Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.410700 4940 generic.go:334] "Generic (PLEG): container finished" podID="e79ed92a-52e6-42a1-9870-08a965e41cd0" containerID="394176dddcb0382c0a2bbc210c6359d6c0e4bb26ecfc27caaa2aa22ad5201b06" exitCode=0 Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.410755 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fca0-account-create-update-cbpzc" event={"ID":"e79ed92a-52e6-42a1-9870-08a965e41cd0","Type":"ContainerDied","Data":"394176dddcb0382c0a2bbc210c6359d6c0e4bb26ecfc27caaa2aa22ad5201b06"} Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.413059 4940 generic.go:334] "Generic (PLEG): container finished" podID="1900fd27-c407-4691-8b8c-c92f97c6829e" containerID="422da4d80f32fe87000a2d770ab1ade34428ef47d6c3a1364b3fff25e0bf9ed5" exitCode=0 Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 
09:08:42.413244 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jw74t" event={"ID":"1900fd27-c407-4691-8b8c-c92f97c6829e","Type":"ContainerDied","Data":"422da4d80f32fe87000a2d770ab1ade34428ef47d6c3a1364b3fff25e0bf9ed5"} Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.785255 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.875914 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9krdb\" (UniqueName: \"kubernetes.io/projected/bbecedf9-3f67-471e-b8e7-8945107b9055-kube-api-access-9krdb\") pod \"bbecedf9-3f67-471e-b8e7-8945107b9055\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.876414 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbecedf9-3f67-471e-b8e7-8945107b9055-operator-scripts\") pod \"bbecedf9-3f67-471e-b8e7-8945107b9055\" (UID: \"bbecedf9-3f67-471e-b8e7-8945107b9055\") " Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.877151 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbecedf9-3f67-471e-b8e7-8945107b9055-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbecedf9-3f67-471e-b8e7-8945107b9055" (UID: "bbecedf9-3f67-471e-b8e7-8945107b9055"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.880481 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbecedf9-3f67-471e-b8e7-8945107b9055-kube-api-access-9krdb" (OuterVolumeSpecName: "kube-api-access-9krdb") pod "bbecedf9-3f67-471e-b8e7-8945107b9055" (UID: "bbecedf9-3f67-471e-b8e7-8945107b9055"). 
InnerVolumeSpecName "kube-api-access-9krdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.935904 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.978794 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9krdb\" (UniqueName: \"kubernetes.io/projected/bbecedf9-3f67-471e-b8e7-8945107b9055-kube-api-access-9krdb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:42 crc kubenswrapper[4940]: I0223 09:08:42.978837 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbecedf9-3f67-471e-b8e7-8945107b9055-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.079845 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24bgx\" (UniqueName: \"kubernetes.io/projected/33bcc7d8-8eed-4039-97fa-d156a882474c-kube-api-access-24bgx\") pod \"33bcc7d8-8eed-4039-97fa-d156a882474c\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.079922 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bcc7d8-8eed-4039-97fa-d156a882474c-operator-scripts\") pod \"33bcc7d8-8eed-4039-97fa-d156a882474c\" (UID: \"33bcc7d8-8eed-4039-97fa-d156a882474c\") " Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.080459 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33bcc7d8-8eed-4039-97fa-d156a882474c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33bcc7d8-8eed-4039-97fa-d156a882474c" (UID: "33bcc7d8-8eed-4039-97fa-d156a882474c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.083841 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bcc7d8-8eed-4039-97fa-d156a882474c-kube-api-access-24bgx" (OuterVolumeSpecName: "kube-api-access-24bgx") pod "33bcc7d8-8eed-4039-97fa-d156a882474c" (UID: "33bcc7d8-8eed-4039-97fa-d156a882474c"). InnerVolumeSpecName "kube-api-access-24bgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.181960 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24bgx\" (UniqueName: \"kubernetes.io/projected/33bcc7d8-8eed-4039-97fa-d156a882474c-kube-api-access-24bgx\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.181997 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33bcc7d8-8eed-4039-97fa-d156a882474c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.425806 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-cjlx5" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.425814 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-cjlx5" event={"ID":"33bcc7d8-8eed-4039-97fa-d156a882474c","Type":"ContainerDied","Data":"6e526df90e3495ac2b72ae367a71306eab64df653ffef4b7d288dbb1c29acc3f"} Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.425925 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e526df90e3495ac2b72ae367a71306eab64df653ffef4b7d288dbb1c29acc3f" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.431149 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f7ce-account-create-update-hms8s" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.431257 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7ce-account-create-update-hms8s" event={"ID":"bbecedf9-3f67-471e-b8e7-8945107b9055","Type":"ContainerDied","Data":"96dfad22f64e49b4b2f6516a1182d54b9c4fd8df53361dba77ac33de9a551d0a"} Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.431293 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96dfad22f64e49b4b2f6516a1182d54b9c4fd8df53361dba77ac33de9a551d0a" Feb 23 09:08:43 crc kubenswrapper[4940]: I0223 09:08:43.986766 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:08:44 crc kubenswrapper[4940]: I0223 09:08:44.054771 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-l59jm"] Feb 23 09:08:44 crc kubenswrapper[4940]: I0223 09:08:44.055093 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-l59jm" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="dnsmasq-dns" containerID="cri-o://a479abe871aff9bead30bdd78c529dd77e9226d086ccbee1c43f279d34eec0c0" gracePeriod=10 Feb 23 09:08:44 crc kubenswrapper[4940]: I0223 09:08:44.376757 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-l59jm" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Feb 23 09:08:44 crc kubenswrapper[4940]: I0223 09:08:44.441337 4940 generic.go:334] "Generic (PLEG): container finished" podID="03982047-fd03-484c-9467-564d3ba0876a" containerID="a479abe871aff9bead30bdd78c529dd77e9226d086ccbee1c43f279d34eec0c0" exitCode=0 Feb 23 09:08:44 crc kubenswrapper[4940]: I0223 09:08:44.441388 4940 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-l59jm" event={"ID":"03982047-fd03-484c-9467-564d3ba0876a","Type":"ContainerDied","Data":"a479abe871aff9bead30bdd78c529dd77e9226d086ccbee1c43f279d34eec0c0"} Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.345027 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.350740 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.356876 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.366228 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.384871 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431600 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gdmr\" (UniqueName: \"kubernetes.io/projected/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-kube-api-access-5gdmr\") pod \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431732 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/926cba46-e952-43fd-a42e-9dfaa77e74d0-operator-scripts\") pod \"926cba46-e952-43fd-a42e-9dfaa77e74d0\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431766 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1900fd27-c407-4691-8b8c-c92f97c6829e-operator-scripts\") pod \"1900fd27-c407-4691-8b8c-c92f97c6829e\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431830 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7qkh\" (UniqueName: \"kubernetes.io/projected/1900fd27-c407-4691-8b8c-c92f97c6829e-kube-api-access-g7qkh\") pod \"1900fd27-c407-4691-8b8c-c92f97c6829e\" (UID: \"1900fd27-c407-4691-8b8c-c92f97c6829e\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431862 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79ed92a-52e6-42a1-9870-08a965e41cd0-operator-scripts\") pod \"e79ed92a-52e6-42a1-9870-08a965e41cd0\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431913 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zz8x2\" (UniqueName: \"kubernetes.io/projected/e79ed92a-52e6-42a1-9870-08a965e41cd0-kube-api-access-zz8x2\") pod \"e79ed92a-52e6-42a1-9870-08a965e41cd0\" (UID: \"e79ed92a-52e6-42a1-9870-08a965e41cd0\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.431984 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-operator-scripts\") pod \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\" (UID: \"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.432034 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b8ddee-466a-4cfa-b22f-c5b256a5b602-operator-scripts\") pod \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.432075 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4x5t\" (UniqueName: \"kubernetes.io/projected/52b8ddee-466a-4cfa-b22f-c5b256a5b602-kube-api-access-t4x5t\") pod \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\" (UID: \"52b8ddee-466a-4cfa-b22f-c5b256a5b602\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.432113 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k5nw\" (UniqueName: \"kubernetes.io/projected/926cba46-e952-43fd-a42e-9dfaa77e74d0-kube-api-access-8k5nw\") pod \"926cba46-e952-43fd-a42e-9dfaa77e74d0\" (UID: \"926cba46-e952-43fd-a42e-9dfaa77e74d0\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.432954 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b8ddee-466a-4cfa-b22f-c5b256a5b602-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"52b8ddee-466a-4cfa-b22f-c5b256a5b602" (UID: "52b8ddee-466a-4cfa-b22f-c5b256a5b602"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.432985 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1900fd27-c407-4691-8b8c-c92f97c6829e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1900fd27-c407-4691-8b8c-c92f97c6829e" (UID: "1900fd27-c407-4691-8b8c-c92f97c6829e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.433035 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" (UID: "92c6b8ad-74d4-447c-b8b2-e6302e5a2d55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.433035 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/926cba46-e952-43fd-a42e-9dfaa77e74d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "926cba46-e952-43fd-a42e-9dfaa77e74d0" (UID: "926cba46-e952-43fd-a42e-9dfaa77e74d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.433773 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79ed92a-52e6-42a1-9870-08a965e41cd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e79ed92a-52e6-42a1-9870-08a965e41cd0" (UID: "e79ed92a-52e6-42a1-9870-08a965e41cd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.438748 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-kube-api-access-5gdmr" (OuterVolumeSpecName: "kube-api-access-5gdmr") pod "92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" (UID: "92c6b8ad-74d4-447c-b8b2-e6302e5a2d55"). InnerVolumeSpecName "kube-api-access-5gdmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.448434 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926cba46-e952-43fd-a42e-9dfaa77e74d0-kube-api-access-8k5nw" (OuterVolumeSpecName: "kube-api-access-8k5nw") pod "926cba46-e952-43fd-a42e-9dfaa77e74d0" (UID: "926cba46-e952-43fd-a42e-9dfaa77e74d0"). InnerVolumeSpecName "kube-api-access-8k5nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.449199 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79ed92a-52e6-42a1-9870-08a965e41cd0-kube-api-access-zz8x2" (OuterVolumeSpecName: "kube-api-access-zz8x2") pod "e79ed92a-52e6-42a1-9870-08a965e41cd0" (UID: "e79ed92a-52e6-42a1-9870-08a965e41cd0"). InnerVolumeSpecName "kube-api-access-zz8x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.455274 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b8ddee-466a-4cfa-b22f-c5b256a5b602-kube-api-access-t4x5t" (OuterVolumeSpecName: "kube-api-access-t4x5t") pod "52b8ddee-466a-4cfa-b22f-c5b256a5b602" (UID: "52b8ddee-466a-4cfa-b22f-c5b256a5b602"). InnerVolumeSpecName "kube-api-access-t4x5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.457101 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8086-account-create-update-pmxvb" event={"ID":"52b8ddee-466a-4cfa-b22f-c5b256a5b602","Type":"ContainerDied","Data":"dafe68772d986c43942f235009d89ea1d8c54f8b3e9fa59a0e4d5918f52c5081"} Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.457139 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafe68772d986c43942f235009d89ea1d8c54f8b3e9fa59a0e4d5918f52c5081" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.457195 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8086-account-create-update-pmxvb" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.463892 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1900fd27-c407-4691-8b8c-c92f97c6829e-kube-api-access-g7qkh" (OuterVolumeSpecName: "kube-api-access-g7qkh") pod "1900fd27-c407-4691-8b8c-c92f97c6829e" (UID: "1900fd27-c407-4691-8b8c-c92f97c6829e"). InnerVolumeSpecName "kube-api-access-g7qkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.467603 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0ed7-account-create-update-rqmbz" event={"ID":"92c6b8ad-74d4-447c-b8b2-e6302e5a2d55","Type":"ContainerDied","Data":"b38eae5b4c201a851012e024ee7447c01260f362619c2639f22fbfc9ac5e19a8"} Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.467660 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38eae5b4c201a851012e024ee7447c01260f362619c2639f22fbfc9ac5e19a8" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.467728 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0ed7-account-create-update-rqmbz" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.472122 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zcqlx" event={"ID":"926cba46-e952-43fd-a42e-9dfaa77e74d0","Type":"ContainerDied","Data":"336519d444336a595628571425220f3981940969a1f1f931a349d9d7fae9e98a"} Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.472161 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336519d444336a595628571425220f3981940969a1f1f931a349d9d7fae9e98a" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.472217 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zcqlx" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.474655 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fca0-account-create-update-cbpzc" event={"ID":"e79ed92a-52e6-42a1-9870-08a965e41cd0","Type":"ContainerDied","Data":"15df81cc49f42d6c63559ab18b6ae01c26bc06d2371aa9c9dff7a46666ce81b2"} Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.474693 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15df81cc49f42d6c63559ab18b6ae01c26bc06d2371aa9c9dff7a46666ce81b2" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.474730 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fca0-account-create-update-cbpzc" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.476164 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jw74t" event={"ID":"1900fd27-c407-4691-8b8c-c92f97c6829e","Type":"ContainerDied","Data":"d38aff858f9fb208ff9a90e9dc9dc2f44c6c295fe6ac3d6cac4c415b82ecd566"} Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.476181 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d38aff858f9fb208ff9a90e9dc9dc2f44c6c295fe6ac3d6cac4c415b82ecd566" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.476197 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jw74t" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533815 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7qkh\" (UniqueName: \"kubernetes.io/projected/1900fd27-c407-4691-8b8c-c92f97c6829e-kube-api-access-g7qkh\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533842 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79ed92a-52e6-42a1-9870-08a965e41cd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533850 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz8x2\" (UniqueName: \"kubernetes.io/projected/e79ed92a-52e6-42a1-9870-08a965e41cd0-kube-api-access-zz8x2\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533860 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533869 4940 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b8ddee-466a-4cfa-b22f-c5b256a5b602-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533878 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4x5t\" (UniqueName: \"kubernetes.io/projected/52b8ddee-466a-4cfa-b22f-c5b256a5b602-kube-api-access-t4x5t\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533887 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k5nw\" (UniqueName: \"kubernetes.io/projected/926cba46-e952-43fd-a42e-9dfaa77e74d0-kube-api-access-8k5nw\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533895 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gdmr\" (UniqueName: \"kubernetes.io/projected/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55-kube-api-access-5gdmr\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533903 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/926cba46-e952-43fd-a42e-9dfaa77e74d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.533911 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1900fd27-c407-4691-8b8c-c92f97c6829e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.706247 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.837717 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-dns-svc\") pod \"03982047-fd03-484c-9467-564d3ba0876a\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.838005 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-config\") pod \"03982047-fd03-484c-9467-564d3ba0876a\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.838066 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sbd4\" (UniqueName: \"kubernetes.io/projected/03982047-fd03-484c-9467-564d3ba0876a-kube-api-access-8sbd4\") pod \"03982047-fd03-484c-9467-564d3ba0876a\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.838109 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-sb\") pod \"03982047-fd03-484c-9467-564d3ba0876a\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.838198 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-nb\") pod \"03982047-fd03-484c-9467-564d3ba0876a\" (UID: \"03982047-fd03-484c-9467-564d3ba0876a\") " Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.842568 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/03982047-fd03-484c-9467-564d3ba0876a-kube-api-access-8sbd4" (OuterVolumeSpecName: "kube-api-access-8sbd4") pod "03982047-fd03-484c-9467-564d3ba0876a" (UID: "03982047-fd03-484c-9467-564d3ba0876a"). InnerVolumeSpecName "kube-api-access-8sbd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.880757 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03982047-fd03-484c-9467-564d3ba0876a" (UID: "03982047-fd03-484c-9467-564d3ba0876a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.880966 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-config" (OuterVolumeSpecName: "config") pod "03982047-fd03-484c-9467-564d3ba0876a" (UID: "03982047-fd03-484c-9467-564d3ba0876a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.883258 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03982047-fd03-484c-9467-564d3ba0876a" (UID: "03982047-fd03-484c-9467-564d3ba0876a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.885173 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03982047-fd03-484c-9467-564d3ba0876a" (UID: "03982047-fd03-484c-9467-564d3ba0876a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.940086 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.940116 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.940126 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sbd4\" (UniqueName: \"kubernetes.io/projected/03982047-fd03-484c-9467-564d3ba0876a-kube-api-access-8sbd4\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.940136 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:46 crc kubenswrapper[4940]: I0223 09:08:46.940145 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03982047-fd03-484c-9467-564d3ba0876a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.486077 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztzmr" event={"ID":"9063161f-40d0-49a1-a4f2-f68a3aff7897","Type":"ContainerStarted","Data":"3692a952631f69e3210d7d0c41508b109967ad4af1b8f9e7a5c6505b602976b0"} Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.488337 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-l59jm" 
event={"ID":"03982047-fd03-484c-9467-564d3ba0876a","Type":"ContainerDied","Data":"5144b4768f140b7db0071e819cf419473fe8dcf53a9c45ec75de1315452c076a"} Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.488394 4940 scope.go:117] "RemoveContainer" containerID="a479abe871aff9bead30bdd78c529dd77e9226d086ccbee1c43f279d34eec0c0" Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.488419 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-l59jm" Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.508924 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ztzmr" podStartSLOduration=2.908526234 podStartE2EDuration="8.508904082s" podCreationTimestamp="2026-02-23 09:08:39 +0000 UTC" firstStartedPulling="2026-02-23 09:08:41.080944832 +0000 UTC m=+1252.464150989" lastFinishedPulling="2026-02-23 09:08:46.68132268 +0000 UTC m=+1258.064528837" observedRunningTime="2026-02-23 09:08:47.501843202 +0000 UTC m=+1258.885049359" watchObservedRunningTime="2026-02-23 09:08:47.508904082 +0000 UTC m=+1258.892110239" Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.509962 4940 scope.go:117] "RemoveContainer" containerID="fccfd6744803257c984bada6c62a8c513da8f52ec2a937c0c1525c7b06f0f5f5" Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.524244 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-l59jm"] Feb 23 09:08:47 crc kubenswrapper[4940]: I0223 09:08:47.531984 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-l59jm"] Feb 23 09:08:49 crc kubenswrapper[4940]: I0223 09:08:49.361606 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03982047-fd03-484c-9467-564d3ba0876a" path="/var/lib/kubelet/pods/03982047-fd03-484c-9467-564d3ba0876a/volumes" Feb 23 09:08:50 crc kubenswrapper[4940]: I0223 09:08:50.517496 4940 generic.go:334] "Generic (PLEG): 
container finished" podID="9063161f-40d0-49a1-a4f2-f68a3aff7897" containerID="3692a952631f69e3210d7d0c41508b109967ad4af1b8f9e7a5c6505b602976b0" exitCode=0 Feb 23 09:08:50 crc kubenswrapper[4940]: I0223 09:08:50.517544 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztzmr" event={"ID":"9063161f-40d0-49a1-a4f2-f68a3aff7897","Type":"ContainerDied","Data":"3692a952631f69e3210d7d0c41508b109967ad4af1b8f9e7a5c6505b602976b0"} Feb 23 09:08:51 crc kubenswrapper[4940]: I0223 09:08:51.846940 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:51 crc kubenswrapper[4940]: I0223 09:08:51.949005 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-combined-ca-bundle\") pod \"9063161f-40d0-49a1-a4f2-f68a3aff7897\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " Feb 23 09:08:51 crc kubenswrapper[4940]: I0223 09:08:51.949075 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-config-data\") pod \"9063161f-40d0-49a1-a4f2-f68a3aff7897\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " Feb 23 09:08:51 crc kubenswrapper[4940]: I0223 09:08:51.949151 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zqfc\" (UniqueName: \"kubernetes.io/projected/9063161f-40d0-49a1-a4f2-f68a3aff7897-kube-api-access-9zqfc\") pod \"9063161f-40d0-49a1-a4f2-f68a3aff7897\" (UID: \"9063161f-40d0-49a1-a4f2-f68a3aff7897\") " Feb 23 09:08:51 crc kubenswrapper[4940]: I0223 09:08:51.956792 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9063161f-40d0-49a1-a4f2-f68a3aff7897-kube-api-access-9zqfc" (OuterVolumeSpecName: "kube-api-access-9zqfc") pod 
"9063161f-40d0-49a1-a4f2-f68a3aff7897" (UID: "9063161f-40d0-49a1-a4f2-f68a3aff7897"). InnerVolumeSpecName "kube-api-access-9zqfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:51 crc kubenswrapper[4940]: I0223 09:08:51.989439 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9063161f-40d0-49a1-a4f2-f68a3aff7897" (UID: "9063161f-40d0-49a1-a4f2-f68a3aff7897"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.008284 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-config-data" (OuterVolumeSpecName: "config-data") pod "9063161f-40d0-49a1-a4f2-f68a3aff7897" (UID: "9063161f-40d0-49a1-a4f2-f68a3aff7897"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.051427 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zqfc\" (UniqueName: \"kubernetes.io/projected/9063161f-40d0-49a1-a4f2-f68a3aff7897-kube-api-access-9zqfc\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.051456 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.051465 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9063161f-40d0-49a1-a4f2-f68a3aff7897-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.544654 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ztzmr" event={"ID":"9063161f-40d0-49a1-a4f2-f68a3aff7897","Type":"ContainerDied","Data":"01757462a3ac7867a967916f92c1e852726d0989c8a41a394f13d72cb29e588d"} Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.544715 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ztzmr" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.544758 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01757462a3ac7867a967916f92c1e852726d0989c8a41a394f13d72cb29e588d" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.777307 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dfkmx"] Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.777956 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534d5483-19f1-48db-92f4-7311eb8e0bdd" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778049 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="534d5483-19f1-48db-92f4-7311eb8e0bdd" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.778124 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778187 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.778248 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1900fd27-c407-4691-8b8c-c92f97c6829e" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778307 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1900fd27-c407-4691-8b8c-c92f97c6829e" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.778397 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="init" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778463 4940 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="init" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.778530 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926cba46-e952-43fd-a42e-9dfaa77e74d0" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778595 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="926cba46-e952-43fd-a42e-9dfaa77e74d0" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.778693 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="dnsmasq-dns" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778758 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="dnsmasq-dns" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.778853 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b8ddee-466a-4cfa-b22f-c5b256a5b602" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.778928 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b8ddee-466a-4cfa-b22f-c5b256a5b602" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.779001 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbecedf9-3f67-471e-b8e7-8945107b9055" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.779063 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbecedf9-3f67-471e-b8e7-8945107b9055" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.779134 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79ed92a-52e6-42a1-9870-08a965e41cd0" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 
09:08:52.779196 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79ed92a-52e6-42a1-9870-08a965e41cd0" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.779261 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33bcc7d8-8eed-4039-97fa-d156a882474c" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.779330 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="33bcc7d8-8eed-4039-97fa-d156a882474c" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: E0223 09:08:52.779399 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9063161f-40d0-49a1-a4f2-f68a3aff7897" containerName="keystone-db-sync" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.779464 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="9063161f-40d0-49a1-a4f2-f68a3aff7897" containerName="keystone-db-sync" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.779916 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79ed92a-52e6-42a1-9870-08a965e41cd0" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780009 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="534d5483-19f1-48db-92f4-7311eb8e0bdd" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780090 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780159 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="926cba46-e952-43fd-a42e-9dfaa77e74d0" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780235 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b8ddee-466a-4cfa-b22f-c5b256a5b602" 
containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780306 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="9063161f-40d0-49a1-a4f2-f68a3aff7897" containerName="keystone-db-sync" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780377 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="1900fd27-c407-4691-8b8c-c92f97c6829e" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780450 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="03982047-fd03-484c-9467-564d3ba0876a" containerName="dnsmasq-dns" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780522 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbecedf9-3f67-471e-b8e7-8945107b9055" containerName="mariadb-account-create-update" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.780589 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="33bcc7d8-8eed-4039-97fa-d156a882474c" containerName="mariadb-database-create" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.781776 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.803507 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dfkmx"] Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.832143 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b4dtx"] Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.833829 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.840667 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.841117 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.841276 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.841811 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l76pf" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.841947 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.857216 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b4dtx"] Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.864604 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.864712 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.864745 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-config\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.864766 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.864795 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.864836 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrcl\" (UniqueName: \"kubernetes.io/projected/95735a1e-0235-40ee-bfe3-94c7e269342c-kube-api-access-vqrcl\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.966456 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.966773 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.966881 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-config\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.966980 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967111 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967208 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-combined-ca-bundle\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967308 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vqrcl\" (UniqueName: \"kubernetes.io/projected/95735a1e-0235-40ee-bfe3-94c7e269342c-kube-api-access-vqrcl\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967401 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-credential-keys\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967498 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-config-data\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967582 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-fernet-keys\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967710 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hzd\" (UniqueName: \"kubernetes.io/projected/0159383e-55f2-47df-9401-3fc82abecc72-kube-api-access-49hzd\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.967810 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-scripts\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.968009 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.968685 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.968954 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-config\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.969084 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:52 crc kubenswrapper[4940]: I0223 09:08:52.969362 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-sb\") pod 
\"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.032819 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrcl\" (UniqueName: \"kubernetes.io/projected/95735a1e-0235-40ee-bfe3-94c7e269342c-kube-api-access-vqrcl\") pod \"dnsmasq-dns-847c4cc679-dfkmx\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.070484 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-combined-ca-bundle\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.070539 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-credential-keys\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.070569 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-config-data\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.070590 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-fernet-keys\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " 
pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.070651 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49hzd\" (UniqueName: \"kubernetes.io/projected/0159383e-55f2-47df-9401-3fc82abecc72-kube-api-access-49hzd\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.070669 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-scripts\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.075013 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-scripts\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.078217 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-combined-ca-bundle\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.078965 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-config-data\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.082276 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-fernet-keys\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.103555 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-credential-keys\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.106632 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6p69q"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.107937 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.108251 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.119228 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f965d" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.119531 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.146706 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.154160 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76557b5cdc-8z8m9"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.155552 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.158131 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49hzd\" (UniqueName: \"kubernetes.io/projected/0159383e-55f2-47df-9401-3fc82abecc72-kube-api-access-49hzd\") pod \"keystone-bootstrap-b4dtx\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.162223 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.162375 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.166945 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-btt58" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.167176 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.178290 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.206489 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6p69q"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.222429 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76557b5cdc-8z8m9"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.278682 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-ktg94"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.279705 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.279942 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrcm\" (UniqueName: \"kubernetes.io/projected/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-kube-api-access-hqrcm\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.280369 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-db-sync-config-data\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.280411 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d288\" (UniqueName: \"kubernetes.io/projected/4fe165a1-2722-4594-82d4-d9b9e5e88a56-kube-api-access-6d288\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.280443 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-scripts\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.280488 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-config-data\") pod \"horizon-76557b5cdc-8z8m9\" (UID: 
\"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.280584 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-scripts\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.282474 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4fe165a1-2722-4594-82d4-d9b9e5e88a56-horizon-secret-key\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.282633 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-combined-ca-bundle\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.282773 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe165a1-2722-4594-82d4-d9b9e5e88a56-logs\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.282986 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-config-data\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " 
pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.283095 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-etc-machine-id\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.313370 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-wsgxw" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.313576 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.389587 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4fe165a1-2722-4594-82d4-d9b9e5e88a56-horizon-secret-key\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.389870 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-combined-ca-bundle\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.389906 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe165a1-2722-4594-82d4-d9b9e5e88a56-logs\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.389942 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-combined-ca-bundle\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.389972 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-config-data\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390005 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-etc-machine-id\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390053 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqrcm\" (UniqueName: \"kubernetes.io/projected/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-kube-api-access-hqrcm\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390107 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-db-sync-config-data\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390131 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthvh\" 
(UniqueName: \"kubernetes.io/projected/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-kube-api-access-sthvh\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390153 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d288\" (UniqueName: \"kubernetes.io/projected/4fe165a1-2722-4594-82d4-d9b9e5e88a56-kube-api-access-6d288\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390177 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-config-data\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390197 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-scripts\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390230 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-config-data\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390250 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-job-config-data\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390269 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-scripts\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.390948 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-scripts\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.406876 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-config-data\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.412374 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ktg94"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.412425 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.418659 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe165a1-2722-4594-82d4-d9b9e5e88a56-logs\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 
09:08:53.424526 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-db-sync-config-data\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.447479 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-scripts\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.460475 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d288\" (UniqueName: \"kubernetes.io/projected/4fe165a1-2722-4594-82d4-d9b9e5e88a56-kube-api-access-6d288\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.463808 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-etc-machine-id\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.490650 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-config-data\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.506855 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/4fe165a1-2722-4594-82d4-d9b9e5e88a56-horizon-secret-key\") pod \"horizon-76557b5cdc-8z8m9\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.509924 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.511728 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthvh\" (UniqueName: \"kubernetes.io/projected/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-kube-api-access-sthvh\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.518377 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-combined-ca-bundle\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.528322 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqrcm\" (UniqueName: \"kubernetes.io/projected/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-kube-api-access-hqrcm\") pod \"cinder-db-sync-6p69q\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.532067 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-config-data\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.533006 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-job-config-data\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.533423 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-combined-ca-bundle\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.541738 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rxlnz"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.543438 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.543599 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6p69q" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.544353 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-job-config-data\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.550839 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bwcqd" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.551080 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.551356 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.552464 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-config-data\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.559662 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-combined-ca-bundle\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.564109 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.564347 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 
09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.565014 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthvh\" (UniqueName: \"kubernetes.io/projected/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-kube-api-access-sthvh\") pod \"manila-db-sync-ktg94\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.569308 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.622873 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dfkmx"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.625959 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ktg94" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.640561 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-config-data\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.640627 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-scripts\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.640662 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-run-httpd\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 
crc kubenswrapper[4940]: I0223 09:08:53.640681 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-log-httpd\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.640701 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lchs\" (UniqueName: \"kubernetes.io/projected/6af3cab0-e7d8-461f-9092-6b5afefff5cc-kube-api-access-9lchs\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.640747 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.640804 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.648715 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.681678 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rxlnz"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.727684 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hcm9c"] Feb 23 09:08:53 crc 
kubenswrapper[4940]: I0223 09:08:53.728926 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.735393 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.735588 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bntvw" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742141 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742219 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-combined-ca-bundle\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742249 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c217a33-e32d-41cc-8fda-6691bf37db15-logs\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742271 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xffzp\" (UniqueName: \"kubernetes.io/projected/7c217a33-e32d-41cc-8fda-6691bf37db15-kube-api-access-xffzp\") pod \"placement-db-sync-rxlnz\" (UID: 
\"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742306 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-config-data\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742322 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-scripts\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742353 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-scripts\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742386 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-run-httpd\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742405 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-log-httpd\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742423 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9lchs\" (UniqueName: \"kubernetes.io/projected/6af3cab0-e7d8-461f-9092-6b5afefff5cc-kube-api-access-9lchs\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742457 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-config-data\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.742482 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.747997 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-log-httpd\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.748330 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-run-httpd\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.749001 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.755110 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.756008 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.767294 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d4447f67f-bwqtp"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.768855 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-scripts\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.770143 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-config-data\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.771726 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.773658 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lchs\" (UniqueName: \"kubernetes.io/projected/6af3cab0-e7d8-461f-9092-6b5afefff5cc-kube-api-access-9lchs\") pod \"ceilometer-0\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.780269 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hcm9c"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.800554 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s4wc7"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.802153 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.828683 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mphgm"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.829999 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.834144 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.837249 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-89wzw" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845772 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-combined-ca-bundle\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845809 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c217a33-e32d-41cc-8fda-6691bf37db15-logs\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845828 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xffzp\" (UniqueName: \"kubernetes.io/projected/7c217a33-e32d-41cc-8fda-6691bf37db15-kube-api-access-xffzp\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845859 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-combined-ca-bundle\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 
09:08:53.845883 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nr7g\" (UniqueName: \"kubernetes.io/projected/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-kube-api-access-5nr7g\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845918 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-scripts\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845940 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-config\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.845981 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-config-data\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.846777 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c217a33-e32d-41cc-8fda-6691bf37db15-logs\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.849232 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d4447f67f-bwqtp"] Feb 23 09:08:53 crc 
kubenswrapper[4940]: I0223 09:08:53.851152 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-config-data\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.852808 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-scripts\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.853361 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-combined-ca-bundle\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.883280 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xffzp\" (UniqueName: \"kubernetes.io/projected/7c217a33-e32d-41cc-8fda-6691bf37db15-kube-api-access-xffzp\") pod \"placement-db-sync-rxlnz\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.894304 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s4wc7"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.923211 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.934441 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mphgm"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947542 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-horizon-secret-key\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947716 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947760 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-config-data\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947797 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-combined-ca-bundle\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947844 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dlkf2\" (UniqueName: \"kubernetes.io/projected/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-kube-api-access-dlkf2\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947869 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.947902 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrtb\" (UniqueName: \"kubernetes.io/projected/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-kube-api-access-rsrtb\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948109 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6dn\" (UniqueName: \"kubernetes.io/projected/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-kube-api-access-dz6dn\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948144 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-config\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948170 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948191 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-logs\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948216 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-combined-ca-bundle\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948248 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nr7g\" (UniqueName: \"kubernetes.io/projected/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-kube-api-access-5nr7g\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948289 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-scripts\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948309 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-db-sync-config-data\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948371 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.948400 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-config\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.960278 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-config\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.966679 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.968496 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.968544 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rxlnz" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.973584 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nr7g\" (UniqueName: \"kubernetes.io/projected/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-kube-api-access-5nr7g\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.974966 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.975141 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h58hk" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.975186 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.976185 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.976303 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.976910 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 09:08:53 crc kubenswrapper[4940]: I0223 09:08:53.987083 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-combined-ca-bundle\") pod \"neutron-db-sync-hcm9c\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:53.998808 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 
09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.001031 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.006816 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.006818 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.010314 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049670 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-horizon-secret-key\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049744 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049790 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-config-data\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049818 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-combined-ca-bundle\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049864 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlkf2\" (UniqueName: \"kubernetes.io/projected/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-kube-api-access-dlkf2\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049885 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049907 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrtb\" (UniqueName: \"kubernetes.io/projected/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-kube-api-access-rsrtb\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049957 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6dn\" (UniqueName: \"kubernetes.io/projected/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-kube-api-access-dz6dn\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.049978 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-config\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.050012 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.050052 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-logs\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.050107 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-db-sync-config-data\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.050124 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-scripts\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.050150 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" 
(UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.051451 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.052830 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.054059 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.054772 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-config\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.055120 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-logs\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 
09:08:54.055153 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.055973 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-config-data\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.056382 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-horizon-secret-key\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.059221 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-db-sync-config-data\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.064443 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.066999 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-combined-ca-bundle\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.067405 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-scripts\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.078413 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrtb\" (UniqueName: \"kubernetes.io/projected/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-kube-api-access-rsrtb\") pod \"barbican-db-sync-mphgm\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.097593 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlkf2\" (UniqueName: \"kubernetes.io/projected/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-kube-api-access-dlkf2\") pod \"horizon-6d4447f67f-bwqtp\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.103524 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6dn\" (UniqueName: \"kubernetes.io/projected/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-kube-api-access-dz6dn\") pod \"dnsmasq-dns-785d8bcb8c-s4wc7\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc 
kubenswrapper[4940]: I0223 09:08:54.124975 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.136992 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151674 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151717 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151745 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151767 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151793 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151837 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151873 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151895 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151919 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151941 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151969 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.151987 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-logs\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.152012 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.152029 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 
09:08:54.152065 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.152093 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lj4\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-kube-api-access-z5lj4\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.152122 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vbp\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-kube-api-access-b4vbp\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.152138 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-ceph\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.158450 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mphgm" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.175659 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dfkmx"] Feb 23 09:08:54 crc kubenswrapper[4940]: W0223 09:08:54.182833 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0159383e_55f2_47df_9401_3fc82abecc72.slice/crio-bc2f3fb2bb6ad931f8fbce26bd49b07602f99f875c96a2dc014e96be2566e4ed WatchSource:0}: Error finding container bc2f3fb2bb6ad931f8fbce26bd49b07602f99f875c96a2dc014e96be2566e4ed: Status 404 returned error can't find the container with id bc2f3fb2bb6ad931f8fbce26bd49b07602f99f875c96a2dc014e96be2566e4ed Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.221542 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b4dtx"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.254079 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.254397 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.254595 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.254841 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5lj4\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-kube-api-access-z5lj4\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255004 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4vbp\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-kube-api-access-b4vbp\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255111 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-ceph\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255227 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255335 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255447 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255539 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255675 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255793 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.255919 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 
09:08:54.256056 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.256233 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.256358 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.256503 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.256742 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-logs\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.264311 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.268599 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.269414 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.271351 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.272761 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.273075 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.273345 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.277099 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-logs\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.279684 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.281983 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-ceph\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.292297 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.292301 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.292360 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.293427 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.294111 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.297179 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: 
I0223 09:08:54.298630 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5lj4\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-kube-api-access-z5lj4\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.318364 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4vbp\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-kube-api-access-b4vbp\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.329673 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6p69q"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.342643 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.346422 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.442741 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76557b5cdc-8z8m9"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.550651 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ktg94"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 
09:08:54.566727 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.608824 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.665898 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ktg94" event={"ID":"a43f9f8e-d118-4247-b1f0-b6aac984bb4d","Type":"ContainerStarted","Data":"47eab27bf9314c0fb748ddfa5f443dbc290e10b214e146aa78d69aa57e2c2ece"} Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.667195 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rxlnz"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.670529 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6p69q" event={"ID":"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5","Type":"ContainerStarted","Data":"0f166c1680cacfb199a057ba9c87258008797ad60c5982554e9cb4c7208aa7fd"} Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.682071 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4dtx" event={"ID":"0159383e-55f2-47df-9401-3fc82abecc72","Type":"ContainerStarted","Data":"bc2f3fb2bb6ad931f8fbce26bd49b07602f99f875c96a2dc014e96be2566e4ed"} Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.682535 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.685912 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76557b5cdc-8z8m9" event={"ID":"4fe165a1-2722-4594-82d4-d9b9e5e88a56","Type":"ContainerStarted","Data":"296838aa6b6a8a95fadb652db4cc85c6822da2d8cf5fa262722f8fcbc7ef481c"} Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.706934 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="95735a1e-0235-40ee-bfe3-94c7e269342c" containerID="3c68d090565ee04e0ad46803e77b58f9be66c29d15814c78f9c4d5aeb978e218" exitCode=0 Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.706977 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" event={"ID":"95735a1e-0235-40ee-bfe3-94c7e269342c","Type":"ContainerDied","Data":"3c68d090565ee04e0ad46803e77b58f9be66c29d15814c78f9c4d5aeb978e218"} Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.707001 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" event={"ID":"95735a1e-0235-40ee-bfe3-94c7e269342c","Type":"ContainerStarted","Data":"88a7222228ef0d4e4b369f0a7ca6016b84260b919205e6dd88fd0d7ad06aa36f"} Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.824415 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d4447f67f-bwqtp"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.940878 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.977585 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mphgm"] Feb 23 09:08:54 crc kubenswrapper[4940]: W0223 09:08:54.977711 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb430a58d_ed32_4642_ac93_d6f0de2eeb0d.slice/crio-da277968472d3c31a972269d1529bfb7c7881186deb860687c991ec7ea656593 WatchSource:0}: Error finding container da277968472d3c31a972269d1529bfb7c7881186deb860687c991ec7ea656593: Status 404 returned error can't find the container with id da277968472d3c31a972269d1529bfb7c7881186deb860687c991ec7ea656593 Feb 23 09:08:54 crc kubenswrapper[4940]: W0223 09:08:54.978330 4940 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf56a426_9c5a_4a94_8740_fbe2c05dafbb.slice/crio-c8d81b28a4b41c8a9cb8fa5e371c45861dda3969c8f697195bdbe3c396974d0a WatchSource:0}: Error finding container c8d81b28a4b41c8a9cb8fa5e371c45861dda3969c8f697195bdbe3c396974d0a: Status 404 returned error can't find the container with id c8d81b28a4b41c8a9cb8fa5e371c45861dda3969c8f697195bdbe3c396974d0a Feb 23 09:08:54 crc kubenswrapper[4940]: I0223 09:08:54.993506 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hcm9c"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.017705 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76557b5cdc-8z8m9"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.036057 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.052384 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s4wc7"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.061712 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86784fc7b9-5n6q4"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.072430 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.082396 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86784fc7b9-5n6q4"] Feb 23 09:08:55 crc kubenswrapper[4940]: W0223 09:08:55.158289 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dc47a31_b1a6_40b2_8d67_5d60854fea4e.slice/crio-26d81b74267ebec2270adbc9d151982b32431fc15e490e012f205a245b807b97 WatchSource:0}: Error finding container 26d81b74267ebec2270adbc9d151982b32431fc15e490e012f205a245b807b97: Status 404 returned error can't find the container with id 26d81b74267ebec2270adbc9d151982b32431fc15e490e012f205a245b807b97 Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.186331 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/caff4ac2-d49f-41a4-a186-c7c1f398a54d-horizon-secret-key\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.193510 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caff4ac2-d49f-41a4-a186-c7c1f398a54d-logs\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.193727 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-scripts\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 
09:08:55.193890 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-config-data\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.193939 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f29g\" (UniqueName: \"kubernetes.io/projected/caff4ac2-d49f-41a4-a186-c7c1f398a54d-kube-api-access-8f29g\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.292421 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.295982 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f29g\" (UniqueName: \"kubernetes.io/projected/caff4ac2-d49f-41a4-a186-c7c1f398a54d-kube-api-access-8f29g\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.296117 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/caff4ac2-d49f-41a4-a186-c7c1f398a54d-horizon-secret-key\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.296146 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caff4ac2-d49f-41a4-a186-c7c1f398a54d-logs\") pod \"horizon-86784fc7b9-5n6q4\" (UID: 
\"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.296227 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-scripts\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.296296 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-config-data\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.298400 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-config-data\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.299629 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caff4ac2-d49f-41a4-a186-c7c1f398a54d-logs\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.300237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-scripts\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.312982 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/caff4ac2-d49f-41a4-a186-c7c1f398a54d-horizon-secret-key\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.320712 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f29g\" (UniqueName: \"kubernetes.io/projected/caff4ac2-d49f-41a4-a186-c7c1f398a54d-kube-api-access-8f29g\") pod \"horizon-86784fc7b9-5n6q4\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.389544 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.400935 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.501170 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-config\") pod \"95735a1e-0235-40ee-bfe3-94c7e269342c\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.501250 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-nb\") pod \"95735a1e-0235-40ee-bfe3-94c7e269342c\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.501276 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-svc\") pod \"95735a1e-0235-40ee-bfe3-94c7e269342c\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.501375 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-sb\") pod \"95735a1e-0235-40ee-bfe3-94c7e269342c\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.501396 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqrcl\" (UniqueName: \"kubernetes.io/projected/95735a1e-0235-40ee-bfe3-94c7e269342c-kube-api-access-vqrcl\") pod \"95735a1e-0235-40ee-bfe3-94c7e269342c\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.501484 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-swift-storage-0\") pod \"95735a1e-0235-40ee-bfe3-94c7e269342c\" (UID: \"95735a1e-0235-40ee-bfe3-94c7e269342c\") " Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.510401 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95735a1e-0235-40ee-bfe3-94c7e269342c-kube-api-access-vqrcl" (OuterVolumeSpecName: "kube-api-access-vqrcl") pod "95735a1e-0235-40ee-bfe3-94c7e269342c" (UID: "95735a1e-0235-40ee-bfe3-94c7e269342c"). InnerVolumeSpecName "kube-api-access-vqrcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.539104 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95735a1e-0235-40ee-bfe3-94c7e269342c" (UID: "95735a1e-0235-40ee-bfe3-94c7e269342c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.549601 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-config" (OuterVolumeSpecName: "config") pod "95735a1e-0235-40ee-bfe3-94c7e269342c" (UID: "95735a1e-0235-40ee-bfe3-94c7e269342c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.550475 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95735a1e-0235-40ee-bfe3-94c7e269342c" (UID: "95735a1e-0235-40ee-bfe3-94c7e269342c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.565763 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95735a1e-0235-40ee-bfe3-94c7e269342c" (UID: "95735a1e-0235-40ee-bfe3-94c7e269342c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.566921 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95735a1e-0235-40ee-bfe3-94c7e269342c" (UID: "95735a1e-0235-40ee-bfe3-94c7e269342c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.575118 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.603518 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.603548 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqrcl\" (UniqueName: \"kubernetes.io/projected/95735a1e-0235-40ee-bfe3-94c7e269342c-kube-api-access-vqrcl\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.603561 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.603573 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.603581 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 
09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.603589 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95735a1e-0235-40ee-bfe3-94c7e269342c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.727241 4940 generic.go:334] "Generic (PLEG): container finished" podID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerID="0262a32618565c7f5571ccacc289cae4e7cbd2594d01b023febd1caf10395488" exitCode=0 Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.727494 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" event={"ID":"7dc47a31-b1a6-40b2-8d67-5d60854fea4e","Type":"ContainerDied","Data":"0262a32618565c7f5571ccacc289cae4e7cbd2594d01b023febd1caf10395488"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.727548 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" event={"ID":"7dc47a31-b1a6-40b2-8d67-5d60854fea4e","Type":"ContainerStarted","Data":"26d81b74267ebec2270adbc9d151982b32431fc15e490e012f205a245b807b97"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.729250 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mphgm" event={"ID":"b430a58d-ed32-4642-ac93-d6f0de2eeb0d","Type":"ContainerStarted","Data":"da277968472d3c31a972269d1529bfb7c7881186deb860687c991ec7ea656593"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.732510 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4dtx" event={"ID":"0159383e-55f2-47df-9401-3fc82abecc72","Type":"ContainerStarted","Data":"c66255ffa47f345c3635c233bea3468a82a41928b871561c873a41d70a0535a6"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.738400 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.738448 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-dfkmx" event={"ID":"95735a1e-0235-40ee-bfe3-94c7e269342c","Type":"ContainerDied","Data":"88a7222228ef0d4e4b369f0a7ca6016b84260b919205e6dd88fd0d7ad06aa36f"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.738531 4940 scope.go:117] "RemoveContainer" containerID="3c68d090565ee04e0ad46803e77b58f9be66c29d15814c78f9c4d5aeb978e218" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.750964 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rxlnz" event={"ID":"7c217a33-e32d-41cc-8fda-6691bf37db15","Type":"ContainerStarted","Data":"ab3d4155c70b911088fcaae189a2aa3c2cfd709ddc0dbf3c363ec5b5fa235783"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.757282 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerStarted","Data":"b7996fbac951655972a87c5f5d69c66e2e11b6f66042deee94fbe46e6c2a8141"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.768538 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4447f67f-bwqtp" event={"ID":"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3","Type":"ContainerStarted","Data":"2d10b1b7fcecd98adf497965f111edf6a861b0792894ef59104ed60803372099"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.787859 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hcm9c" event={"ID":"bf56a426-9c5a-4a94-8740-fbe2c05dafbb","Type":"ContainerStarted","Data":"93564bc54012fea7c0d172def0757dd0e72e20b2ee023b5224e79fa561559ff4"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.787943 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hcm9c" 
event={"ID":"bf56a426-9c5a-4a94-8740-fbe2c05dafbb","Type":"ContainerStarted","Data":"c8d81b28a4b41c8a9cb8fa5e371c45861dda3969c8f697195bdbe3c396974d0a"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.788215 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b4dtx" podStartSLOduration=3.788195907 podStartE2EDuration="3.788195907s" podCreationTimestamp="2026-02-23 09:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:55.766233445 +0000 UTC m=+1267.149439602" watchObservedRunningTime="2026-02-23 09:08:55.788195907 +0000 UTC m=+1267.171402064" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.798105 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e771581c-dd8e-43df-be95-2a6b3f3d47a1","Type":"ContainerStarted","Data":"18d135d5f9634962b2af4da91c8cdea42f00db54b3a34faae322f112d3a6bd44"} Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.841207 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dfkmx"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.846253 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-dfkmx"] Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.854277 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hcm9c" podStartSLOduration=2.854254518 podStartE2EDuration="2.854254518s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:55.853537695 +0000 UTC m=+1267.236743872" watchObservedRunningTime="2026-02-23 09:08:55.854254518 +0000 UTC m=+1267.237460695" Feb 23 09:08:55 crc kubenswrapper[4940]: I0223 09:08:55.964056 4940 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86784fc7b9-5n6q4"] Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.333071 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:08:56 crc kubenswrapper[4940]: W0223 09:08:56.345163 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4f65141_3bae_4ad8_978d_e1ff9cc68a7c.slice/crio-d61490aead547ddd5af96fba0c9edb863a81c1a68b181d13111708e12302a905 WatchSource:0}: Error finding container d61490aead547ddd5af96fba0c9edb863a81c1a68b181d13111708e12302a905: Status 404 returned error can't find the container with id d61490aead547ddd5af96fba0c9edb863a81c1a68b181d13111708e12302a905 Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.842452 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" event={"ID":"7dc47a31-b1a6-40b2-8d67-5d60854fea4e","Type":"ContainerStarted","Data":"1bf928855769ada0b818334a0a7eaf1e33ae4bf03eeec0a85338f4139f7f3393"} Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.842778 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.845948 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c","Type":"ContainerStarted","Data":"d61490aead547ddd5af96fba0c9edb863a81c1a68b181d13111708e12302a905"} Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.855439 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86784fc7b9-5n6q4" event={"ID":"caff4ac2-d49f-41a4-a186-c7c1f398a54d","Type":"ContainerStarted","Data":"4d9ab5ff008aa1123d500ac1f7811753deb43facccc38957ba36e3df9375cbe2"} Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.864440 4940 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" podStartSLOduration=3.864419469 podStartE2EDuration="3.864419469s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:56.860058564 +0000 UTC m=+1268.243264721" watchObservedRunningTime="2026-02-23 09:08:56.864419469 +0000 UTC m=+1268.247625626" Feb 23 09:08:56 crc kubenswrapper[4940]: I0223 09:08:56.865483 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e771581c-dd8e-43df-be95-2a6b3f3d47a1","Type":"ContainerStarted","Data":"94fc406d47308236ad3f7e6cd6440e28d50b7047815ecf7a3d1afad2c90f894a"} Feb 23 09:08:57 crc kubenswrapper[4940]: I0223 09:08:57.366854 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95735a1e-0235-40ee-bfe3-94c7e269342c" path="/var/lib/kubelet/pods/95735a1e-0235-40ee-bfe3-94c7e269342c/volumes" Feb 23 09:08:57 crc kubenswrapper[4940]: I0223 09:08:57.879312 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c","Type":"ContainerStarted","Data":"6f47c6c78b1a63e4aab8f7ba80739a7738f5171315a9f9c9ca4cdd4e57d06c47"} Feb 23 09:08:57 crc kubenswrapper[4940]: I0223 09:08:57.892922 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-log" containerID="cri-o://94fc406d47308236ad3f7e6cd6440e28d50b7047815ecf7a3d1afad2c90f894a" gracePeriod=30 Feb 23 09:08:57 crc kubenswrapper[4940]: I0223 09:08:57.894034 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" 
containerName="glance-httpd" containerID="cri-o://de9d8049630ed432919ad8cddb34d3c13a95a25e9f5a9ffe83adef5d415e4252" gracePeriod=30 Feb 23 09:08:57 crc kubenswrapper[4940]: I0223 09:08:57.896526 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e771581c-dd8e-43df-be95-2a6b3f3d47a1","Type":"ContainerStarted","Data":"de9d8049630ed432919ad8cddb34d3c13a95a25e9f5a9ffe83adef5d415e4252"} Feb 23 09:08:57 crc kubenswrapper[4940]: I0223 09:08:57.931234 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.9312169279999996 podStartE2EDuration="4.931216928s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:57.916437049 +0000 UTC m=+1269.299643206" watchObservedRunningTime="2026-02-23 09:08:57.931216928 +0000 UTC m=+1269.314423085" Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.928929 4940 generic.go:334] "Generic (PLEG): container finished" podID="0159383e-55f2-47df-9401-3fc82abecc72" containerID="c66255ffa47f345c3635c233bea3468a82a41928b871561c873a41d70a0535a6" exitCode=0 Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.929078 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4dtx" event={"ID":"0159383e-55f2-47df-9401-3fc82abecc72","Type":"ContainerDied","Data":"c66255ffa47f345c3635c233bea3468a82a41928b871561c873a41d70a0535a6"} Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.955364 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c","Type":"ContainerStarted","Data":"94cafb9bbd168512415a6b93e03f7c0742ba6e5068268ad6517631e8bd1bd143"} Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.955468 4940 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-httpd" containerID="cri-o://94cafb9bbd168512415a6b93e03f7c0742ba6e5068268ad6517631e8bd1bd143" gracePeriod=30 Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.955445 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-log" containerID="cri-o://6f47c6c78b1a63e4aab8f7ba80739a7738f5171315a9f9c9ca4cdd4e57d06c47" gracePeriod=30 Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.976945 4940 generic.go:334] "Generic (PLEG): container finished" podID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerID="de9d8049630ed432919ad8cddb34d3c13a95a25e9f5a9ffe83adef5d415e4252" exitCode=143 Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.976974 4940 generic.go:334] "Generic (PLEG): container finished" podID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerID="94fc406d47308236ad3f7e6cd6440e28d50b7047815ecf7a3d1afad2c90f894a" exitCode=143 Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.976994 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e771581c-dd8e-43df-be95-2a6b3f3d47a1","Type":"ContainerDied","Data":"de9d8049630ed432919ad8cddb34d3c13a95a25e9f5a9ffe83adef5d415e4252"} Feb 23 09:08:58 crc kubenswrapper[4940]: I0223 09:08:58.977018 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e771581c-dd8e-43df-be95-2a6b3f3d47a1","Type":"ContainerDied","Data":"94fc406d47308236ad3f7e6cd6440e28d50b7047815ecf7a3d1afad2c90f894a"} Feb 23 09:08:59 crc kubenswrapper[4940]: I0223 09:08:59.011972 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" 
podStartSLOduration=6.01195449 podStartE2EDuration="6.01195449s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:08:59.004957323 +0000 UTC m=+1270.388163490" watchObservedRunningTime="2026-02-23 09:08:59.01195449 +0000 UTC m=+1270.395160647" Feb 23 09:08:59 crc kubenswrapper[4940]: I0223 09:08:59.995891 4940 generic.go:334] "Generic (PLEG): container finished" podID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerID="94cafb9bbd168512415a6b93e03f7c0742ba6e5068268ad6517631e8bd1bd143" exitCode=143 Feb 23 09:08:59 crc kubenswrapper[4940]: I0223 09:08:59.995930 4940 generic.go:334] "Generic (PLEG): container finished" podID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerID="6f47c6c78b1a63e4aab8f7ba80739a7738f5171315a9f9c9ca4cdd4e57d06c47" exitCode=143 Feb 23 09:08:59 crc kubenswrapper[4940]: I0223 09:08:59.995948 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c","Type":"ContainerDied","Data":"94cafb9bbd168512415a6b93e03f7c0742ba6e5068268ad6517631e8bd1bd143"} Feb 23 09:08:59 crc kubenswrapper[4940]: I0223 09:08:59.998987 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c","Type":"ContainerDied","Data":"6f47c6c78b1a63e4aab8f7ba80739a7738f5171315a9f9c9ca4cdd4e57d06c47"} Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.096769 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d4447f67f-bwqtp"] Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.121973 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6884678d78-ckt87"] Feb 23 09:09:02 crc kubenswrapper[4940]: E0223 09:09:02.122425 4940 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95735a1e-0235-40ee-bfe3-94c7e269342c" containerName="init" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.122444 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="95735a1e-0235-40ee-bfe3-94c7e269342c" containerName="init" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.122619 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="95735a1e-0235-40ee-bfe3-94c7e269342c" containerName="init" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.123793 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.126421 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.145124 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6884678d78-ckt87"] Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.170685 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86784fc7b9-5n6q4"] Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.240406 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242495 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8485464bb-cvmj5"] Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242622 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-tls-certs\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242673 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-config-data\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242724 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-scripts\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242840 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkvh\" (UniqueName: \"kubernetes.io/projected/e330abf6-9282-4221-b286-672ffc3985e7-kube-api-access-pbkvh\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242888 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-combined-ca-bundle\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.242970 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-secret-key\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.243005 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e330abf6-9282-4221-b286-672ffc3985e7-logs\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: E0223 09:09:02.243039 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0159383e-55f2-47df-9401-3fc82abecc72" containerName="keystone-bootstrap" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.243057 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0159383e-55f2-47df-9401-3fc82abecc72" containerName="keystone-bootstrap" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.243310 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0159383e-55f2-47df-9401-3fc82abecc72" containerName="keystone-bootstrap" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.244534 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.273002 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.282086 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8485464bb-cvmj5"] Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.344858 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-combined-ca-bundle\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345033 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5lj4\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-kube-api-access-z5lj4\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345071 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-fernet-keys\") pod \"0159383e-55f2-47df-9401-3fc82abecc72\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345119 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-combined-ca-bundle\") pod \"0159383e-55f2-47df-9401-3fc82abecc72\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345174 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-credential-keys\") pod \"0159383e-55f2-47df-9401-3fc82abecc72\" (UID: 
\"0159383e-55f2-47df-9401-3fc82abecc72\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345232 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-config-data\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345264 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-scripts\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345318 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-config-data\") pod \"0159383e-55f2-47df-9401-3fc82abecc72\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345373 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-httpd-run\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345406 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-logs\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345455 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-public-tls-certs\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345487 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-ceph\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345526 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49hzd\" (UniqueName: \"kubernetes.io/projected/0159383e-55f2-47df-9401-3fc82abecc72-kube-api-access-49hzd\") pod \"0159383e-55f2-47df-9401-3fc82abecc72\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345566 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-scripts\") pod \"0159383e-55f2-47df-9401-3fc82abecc72\" (UID: \"0159383e-55f2-47df-9401-3fc82abecc72\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.345592 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\" (UID: \"e771581c-dd8e-43df-be95-2a6b3f3d47a1\") " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347053 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-tls-certs\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347128 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c698dee-e3c4-44d3-a08b-73e6b1e87986-config-data\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347161 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q875c\" (UniqueName: \"kubernetes.io/projected/0c698dee-e3c4-44d3-a08b-73e6b1e87986-kube-api-access-q875c\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347277 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-combined-ca-bundle\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347409 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-config-data\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347442 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-horizon-tls-certs\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.347574 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-scripts\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.348795 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkvh\" (UniqueName: \"kubernetes.io/projected/e330abf6-9282-4221-b286-672ffc3985e7-kube-api-access-pbkvh\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.348890 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-combined-ca-bundle\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.348951 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-secret-key\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.349034 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e330abf6-9282-4221-b286-672ffc3985e7-logs\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.349173 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/0c698dee-e3c4-44d3-a08b-73e6b1e87986-logs\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.349206 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-horizon-secret-key\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.349228 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c698dee-e3c4-44d3-a08b-73e6b1e87986-scripts\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.351485 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.351833 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-logs" (OuterVolumeSpecName: "logs") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.353668 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-scripts\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.354318 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e330abf6-9282-4221-b286-672ffc3985e7-logs\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.355268 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-config-data\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.359785 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0159383e-55f2-47df-9401-3fc82abecc72" (UID: "0159383e-55f2-47df-9401-3fc82abecc72"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.360945 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-combined-ca-bundle\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.363931 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0159383e-55f2-47df-9401-3fc82abecc72" (UID: "0159383e-55f2-47df-9401-3fc82abecc72"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.364303 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-kube-api-access-z5lj4" (OuterVolumeSpecName: "kube-api-access-z5lj4") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "kube-api-access-z5lj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.365556 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-tls-certs\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.374707 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.375105 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-secret-key\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.376926 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-ceph" (OuterVolumeSpecName: "ceph") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.379087 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-scripts" (OuterVolumeSpecName: "scripts") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.380655 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0159383e-55f2-47df-9401-3fc82abecc72-kube-api-access-49hzd" (OuterVolumeSpecName: "kube-api-access-49hzd") pod "0159383e-55f2-47df-9401-3fc82abecc72" (UID: "0159383e-55f2-47df-9401-3fc82abecc72"). InnerVolumeSpecName "kube-api-access-49hzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.383982 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbkvh\" (UniqueName: \"kubernetes.io/projected/e330abf6-9282-4221-b286-672ffc3985e7-kube-api-access-pbkvh\") pod \"horizon-6884678d78-ckt87\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.386433 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-scripts" (OuterVolumeSpecName: "scripts") pod "0159383e-55f2-47df-9401-3fc82abecc72" (UID: "0159383e-55f2-47df-9401-3fc82abecc72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.416536 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-config-data" (OuterVolumeSpecName: "config-data") pod "0159383e-55f2-47df-9401-3fc82abecc72" (UID: "0159383e-55f2-47df-9401-3fc82abecc72"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.449020 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.450779 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c698dee-e3c4-44d3-a08b-73e6b1e87986-logs\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.450840 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-horizon-secret-key\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.451049 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c698dee-e3c4-44d3-a08b-73e6b1e87986-scripts\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.451094 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c698dee-e3c4-44d3-a08b-73e6b1e87986-config-data\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 
crc kubenswrapper[4940]: I0223 09:09:02.451115 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q875c\" (UniqueName: \"kubernetes.io/projected/0c698dee-e3c4-44d3-a08b-73e6b1e87986-kube-api-access-q875c\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.451149 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-combined-ca-bundle\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.451232 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-horizon-tls-certs\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.451417 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c698dee-e3c4-44d3-a08b-73e6b1e87986-logs\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.451436 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5lj4\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-kube-api-access-z5lj4\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.453095 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0c698dee-e3c4-44d3-a08b-73e6b1e87986-scripts\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.453804 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c698dee-e3c4-44d3-a08b-73e6b1e87986-config-data\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455702 4940 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455775 4940 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455820 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455835 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455847 4940 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455862 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e771581c-dd8e-43df-be95-2a6b3f3d47a1-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455874 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e771581c-dd8e-43df-be95-2a6b3f3d47a1-ceph\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455887 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49hzd\" (UniqueName: \"kubernetes.io/projected/0159383e-55f2-47df-9401-3fc82abecc72-kube-api-access-49hzd\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455900 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455931 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.455956 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.457981 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-horizon-secret-key\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.457989 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-horizon-tls-certs\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.459504 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0159383e-55f2-47df-9401-3fc82abecc72" (UID: "0159383e-55f2-47df-9401-3fc82abecc72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.459738 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c698dee-e3c4-44d3-a08b-73e6b1e87986-combined-ca-bundle\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.476578 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q875c\" (UniqueName: \"kubernetes.io/projected/0c698dee-e3c4-44d3-a08b-73e6b1e87986-kube-api-access-q875c\") pod \"horizon-8485464bb-cvmj5\" (UID: \"0c698dee-e3c4-44d3-a08b-73e6b1e87986\") " pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.478205 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.499556 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-config-data" (OuterVolumeSpecName: "config-data") pod "e771581c-dd8e-43df-be95-2a6b3f3d47a1" (UID: "e771581c-dd8e-43df-be95-2a6b3f3d47a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.519097 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.558751 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0159383e-55f2-47df-9401-3fc82abecc72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.558785 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.558797 4940 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e771581c-dd8e-43df-be95-2a6b3f3d47a1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.558808 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.576000 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:02 crc kubenswrapper[4940]: I0223 09:09:02.590054 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.041433 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e771581c-dd8e-43df-be95-2a6b3f3d47a1","Type":"ContainerDied","Data":"18d135d5f9634962b2af4da91c8cdea42f00db54b3a34faae322f112d3a6bd44"} Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.041473 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.041480 4940 scope.go:117] "RemoveContainer" containerID="de9d8049630ed432919ad8cddb34d3c13a95a25e9f5a9ffe83adef5d415e4252" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.046576 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b4dtx" event={"ID":"0159383e-55f2-47df-9401-3fc82abecc72","Type":"ContainerDied","Data":"bc2f3fb2bb6ad931f8fbce26bd49b07602f99f875c96a2dc014e96be2566e4ed"} Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.046632 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc2f3fb2bb6ad931f8fbce26bd49b07602f99f875c96a2dc014e96be2566e4ed" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.046653 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b4dtx" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.086430 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.098565 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.114088 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:09:03 crc kubenswrapper[4940]: E0223 09:09:03.114718 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-httpd" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.114735 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-httpd" Feb 23 09:09:03 crc kubenswrapper[4940]: E0223 09:09:03.114764 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-log" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.114771 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-log" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.115185 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-httpd" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.115201 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" containerName="glance-log" Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.117234 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.119967 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.120072 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.124940 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.168695 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-ceph\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.168804 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-config-data\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.168849 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-scripts\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.168908 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbgw\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-kube-api-access-9bbgw\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.168946 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.169021 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.169083 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.169120 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-logs\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.169154 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.271830 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.271896 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.271936 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-logs\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.271970 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.271997 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-ceph\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.272071 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-config-data\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.272114 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-scripts\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.272185 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbgw\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-kube-api-access-9bbgw\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.272211 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.273175 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.275106 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.277848 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-logs\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.284973 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.285133 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.287934 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-config-data\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.289938 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-scripts\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.295508 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbgw\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-kube-api-access-9bbgw\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.298950 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-ceph\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.314690 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.365266 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e771581c-dd8e-43df-be95-2a6b3f3d47a1" path="/var/lib/kubelet/pods/e771581c-dd8e-43df-be95-2a6b3f3d47a1/volumes"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.386034 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b4dtx"]
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.397685 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b4dtx"]
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.477163 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.482755 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7d9wv"]
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.484157 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.488677 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l76pf"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.488976 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.489005 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.489182 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.493963 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.499438 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7d9wv"]
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.581050 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-credential-keys\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.581106 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-config-data\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.581143 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-combined-ca-bundle\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.581324 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-scripts\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.581499 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-fernet-keys\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.582021 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hc6w\" (UniqueName: \"kubernetes.io/projected/b224c257-f773-40f9-b62b-8d6e897ed198-kube-api-access-7hc6w\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.683990 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hc6w\" (UniqueName: \"kubernetes.io/projected/b224c257-f773-40f9-b62b-8d6e897ed198-kube-api-access-7hc6w\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.684486 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-credential-keys\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.684529 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-config-data\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.684574 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-combined-ca-bundle\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.684642 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-scripts\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.684727 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-fernet-keys\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.688536 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-credential-keys\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.688599 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-combined-ca-bundle\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.688999 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-scripts\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.689095 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-fernet-keys\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.693438 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-config-data\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.707962 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hc6w\" (UniqueName: \"kubernetes.io/projected/b224c257-f773-40f9-b62b-8d6e897ed198-kube-api-access-7hc6w\") pod \"keystone-bootstrap-7d9wv\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:03 crc kubenswrapper[4940]: I0223 09:09:03.807556 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7d9wv"
Feb 23 09:09:04 crc kubenswrapper[4940]: I0223 09:09:04.139906 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7"
Feb 23 09:09:04 crc kubenswrapper[4940]: I0223 09:09:04.227247 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-czh8n"]
Feb 23 09:09:04 crc kubenswrapper[4940]: I0223 09:09:04.227532 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" containerID="cri-o://d17f230221091f10b35e26889ddeb92a41f743813d36024c58f17a18baacb26b" gracePeriod=10
Feb 23 09:09:05 crc kubenswrapper[4940]: I0223 09:09:05.081188 4940 generic.go:334] "Generic (PLEG): container finished" podID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerID="d17f230221091f10b35e26889ddeb92a41f743813d36024c58f17a18baacb26b" exitCode=0
Feb 23 09:09:05 crc kubenswrapper[4940]: I0223 09:09:05.081359 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" event={"ID":"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284","Type":"ContainerDied","Data":"d17f230221091f10b35e26889ddeb92a41f743813d36024c58f17a18baacb26b"}
Feb 23 09:09:05 crc kubenswrapper[4940]: I0223 09:09:05.355994 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0159383e-55f2-47df-9401-3fc82abecc72" path="/var/lib/kubelet/pods/0159383e-55f2-47df-9401-3fc82abecc72/volumes"
Feb 23 09:09:10 crc kubenswrapper[4940]: E0223 09:09:10.700120 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Feb 23 09:09:10 crc kubenswrapper[4940]: E0223 09:09:10.703908 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh549h56dh66fh65h5bfh9h5f4h65bh5b5h64fhddh699h545h558hb4h5b8hb4h5b4h5b5h5ch5dchd6h64ch666h558h674h556h57ch696h68fh68fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6d288,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76557b5cdc-8z8m9_openstack(4fe165a1-2722-4594-82d4-d9b9e5e88a56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 09:09:10 crc kubenswrapper[4940]: E0223 09:09:10.706239 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-76557b5cdc-8z8m9" podUID="4fe165a1-2722-4594-82d4-d9b9e5e88a56"
Feb 23 09:09:13 crc kubenswrapper[4940]: E0223 09:09:13.265581 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Feb 23 09:09:13 crc kubenswrapper[4940]: E0223 09:09:13.266113 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58dh5bh56ch5cch6h567h669h7bhc4h5cfh8fh665h86h57fh658h86h689h687h5dch655h68fh5b4h96h685hc7h58dhdbhfch547h668h584h5fcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f29g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-86784fc7b9-5n6q4_openstack(caff4ac2-d49f-41a4-a186-c7c1f398a54d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 09:09:13 crc kubenswrapper[4940]: E0223 09:09:13.268441 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-86784fc7b9-5n6q4" podUID="caff4ac2-d49f-41a4-a186-c7c1f398a54d"
Feb 23 09:09:13 crc kubenswrapper[4940]: I0223 09:09:13.986218 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout"
Feb 23 09:09:14 crc kubenswrapper[4940]: I0223 09:09:14.153926 4940 generic.go:334] "Generic (PLEG): container finished" podID="bf56a426-9c5a-4a94-8740-fbe2c05dafbb" containerID="93564bc54012fea7c0d172def0757dd0e72e20b2ee023b5224e79fa561559ff4" exitCode=0
Feb 23 09:09:14 crc kubenswrapper[4940]: I0223 09:09:14.154006 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hcm9c" event={"ID":"bf56a426-9c5a-4a94-8740-fbe2c05dafbb","Type":"ContainerDied","Data":"93564bc54012fea7c0d172def0757dd0e72e20b2ee023b5224e79fa561559ff4"}
Feb 23 09:09:18 crc kubenswrapper[4940]: I0223 09:09:18.987738 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout"
Feb 23 09:09:22 crc kubenswrapper[4940]: E0223 09:09:22.799288 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Feb 23 09:09:22 crc kubenswrapper[4940]: E0223 09:09:22.800081 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64fhc8h59dh577h5b6h88h546h58ch5b7hd8h65ch94h5c8hc4h576hb5h65fh545h75h5d8hfdhdfh64bh6bh69h8h588h544h584hfdh597h84q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlkf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d4447f67f-bwqtp_openstack(05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 23 09:09:22 crc kubenswrapper[4940]: E0223 09:09:22.802699 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6d4447f67f-bwqtp" podUID="05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3"
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.818795 4940 scope.go:117] "RemoveContainer" containerID="94fc406d47308236ad3f7e6cd6440e28d50b7047815ecf7a3d1afad2c90f894a"
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.916765 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n"
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.918380 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971473 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4vbp\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-kube-api-access-b4vbp\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971572 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-config\") pod \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971697 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-swift-storage-0\") pod \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971834 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-ceph\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971871 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-scripts\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971926 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-nb\") pod \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.971984 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-sb\") pod \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972039 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972067 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-combined-ca-bundle\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972117 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-internal-tls-certs\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972178 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-logs\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972248 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-config-data\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972314 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-httpd-run\") pod \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\" (UID: \"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972360 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ww5s\" (UniqueName: \"kubernetes.io/projected/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-kube-api-access-2ww5s\") pod \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.972389 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-svc\") pod \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\" (UID: \"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284\") "
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.974929 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-logs" (OuterVolumeSpecName: "logs") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.974970 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.975428 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-logs\") on node \"crc\" DevicePath \"\""
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.975463 4940 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.980451 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-ceph" (OuterVolumeSpecName: "ceph") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.982898 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-scripts" (OuterVolumeSpecName: "scripts") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.983674 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.983946 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-kube-api-access-b4vbp" (OuterVolumeSpecName: "kube-api-access-b4vbp") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "kube-api-access-b4vbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:22 crc kubenswrapper[4940]: I0223 09:09:22.989178 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-kube-api-access-2ww5s" (OuterVolumeSpecName: "kube-api-access-2ww5s") pod "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" (UID: "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284"). InnerVolumeSpecName "kube-api-access-2ww5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.016084 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.029345 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" (UID: "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.029354 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" (UID: "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.035461 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.053978 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" (UID: "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.054051 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-config" (OuterVolumeSpecName: "config") pod "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" (UID: "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.064110 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" (UID: "a8aafe0f-b233-4ea4-84a4-a28d6cfd2284"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.068954 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-config-data" (OuterVolumeSpecName: "config-data") pod "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" (UID: "d4f65141-3bae-4ad8-978d-e1ff9cc68a7c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077372 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077406 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077419 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077457 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077468 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077476 4940 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077484 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077495 4940 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-2ww5s\" (UniqueName: \"kubernetes.io/projected/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-kube-api-access-2ww5s\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077503 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077511 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4vbp\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-kube-api-access-b4vbp\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077519 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077527 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.077534 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c-ceph\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.097296 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.179843 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc 
kubenswrapper[4940]: I0223 09:09:23.241546 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4f65141-3bae-4ad8-978d-e1ff9cc68a7c","Type":"ContainerDied","Data":"d61490aead547ddd5af96fba0c9edb863a81c1a68b181d13111708e12302a905"} Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.241569 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.245986 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" event={"ID":"a8aafe0f-b233-4ea4-84a4-a28d6cfd2284","Type":"ContainerDied","Data":"556b35648ccafe647b1d26aa35b82501159431767389488b7c8d4ad6dcd0e7e9"} Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.246001 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.295210 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-czh8n"] Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.306730 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-czh8n"] Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.323433 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.335188 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.356521 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" path="/var/lib/kubelet/pods/a8aafe0f-b233-4ea4-84a4-a28d6cfd2284/volumes" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.357192 4940 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" path="/var/lib/kubelet/pods/d4f65141-3bae-4ad8-978d-e1ff9cc68a7c/volumes" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.357884 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:09:23 crc kubenswrapper[4940]: E0223 09:09:23.358183 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-httpd" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358205 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-httpd" Feb 23 09:09:23 crc kubenswrapper[4940]: E0223 09:09:23.358226 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="init" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358233 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="init" Feb 23 09:09:23 crc kubenswrapper[4940]: E0223 09:09:23.358250 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358257 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" Feb 23 09:09:23 crc kubenswrapper[4940]: E0223 09:09:23.358278 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-log" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358286 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-log" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358461 4940 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358480 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-log" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.358493 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f65141-3bae-4ad8-978d-e1ff9cc68a7c" containerName="glance-httpd" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.359583 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.361939 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.362061 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.387508 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:09:23 crc kubenswrapper[4940]: E0223 09:09:23.392638 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 23 09:09:23 crc kubenswrapper[4940]: E0223 09:09:23.392804 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdh89h96hdh8fh678h67ch77h657h5cdh684hf4h644h587hf6hfh5h5cbh6h659h87h86h68fh5cch5f7h5bbh57h656h5f4hfbh87h647q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lchs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6af3cab0-e7d8-461f-9092-6b5afefff5cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.402566 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.403174 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.488910 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489287 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489436 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489538 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489654 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489790 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489856 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.489991 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.490159 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqnc\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-kube-api-access-sdqnc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.590973 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-config-data\") pod 
\"4fe165a1-2722-4594-82d4-d9b9e5e88a56\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591080 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4fe165a1-2722-4594-82d4-d9b9e5e88a56-horizon-secret-key\") pod \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591103 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-scripts\") pod \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591161 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d288\" (UniqueName: \"kubernetes.io/projected/4fe165a1-2722-4594-82d4-d9b9e5e88a56-kube-api-access-6d288\") pod \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591195 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe165a1-2722-4594-82d4-d9b9e5e88a56-logs\") pod \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\" (UID: \"4fe165a1-2722-4594-82d4-d9b9e5e88a56\") " Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591282 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591322 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591346 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591406 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591424 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdqnc\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-kube-api-access-sdqnc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591672 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591702 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591738 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591762 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591997 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.592856 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.591697 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-config-data" (OuterVolumeSpecName: 
"config-data") pod "4fe165a1-2722-4594-82d4-d9b9e5e88a56" (UID: "4fe165a1-2722-4594-82d4-d9b9e5e88a56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.593031 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe165a1-2722-4594-82d4-d9b9e5e88a56-logs" (OuterVolumeSpecName: "logs") pod "4fe165a1-2722-4594-82d4-d9b9e5e88a56" (UID: "4fe165a1-2722-4594-82d4-d9b9e5e88a56"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.593203 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-scripts" (OuterVolumeSpecName: "scripts") pod "4fe165a1-2722-4594-82d4-d9b9e5e88a56" (UID: "4fe165a1-2722-4594-82d4-d9b9e5e88a56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.593198 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.596576 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe165a1-2722-4594-82d4-d9b9e5e88a56-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4fe165a1-2722-4594-82d4-d9b9e5e88a56" (UID: "4fe165a1-2722-4594-82d4-d9b9e5e88a56"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.596915 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.597421 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe165a1-2722-4594-82d4-d9b9e5e88a56-kube-api-access-6d288" (OuterVolumeSpecName: "kube-api-access-6d288") pod "4fe165a1-2722-4594-82d4-d9b9e5e88a56" (UID: "4fe165a1-2722-4594-82d4-d9b9e5e88a56"). InnerVolumeSpecName "kube-api-access-6d288". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.599557 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.600707 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.602295 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.606144 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.614388 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdqnc\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-kube-api-access-sdqnc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.621861 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.685266 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.692533 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe165a1-2722-4594-82d4-d9b9e5e88a56-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.692761 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.692852 4940 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4fe165a1-2722-4594-82d4-d9b9e5e88a56-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.692931 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fe165a1-2722-4594-82d4-d9b9e5e88a56-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.693012 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d288\" (UniqueName: \"kubernetes.io/projected/4fe165a1-2722-4594-82d4-d9b9e5e88a56-kube-api-access-6d288\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:23 crc kubenswrapper[4940]: I0223 09:09:23.989346 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-czh8n" podUID="a8aafe0f-b233-4ea4-84a4-a28d6cfd2284" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 23 09:09:24 crc kubenswrapper[4940]: E0223 09:09:24.028145 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-manila-api:current-podified" Feb 23 09:09:24 crc 
kubenswrapper[4940]: E0223 09:09:24.028332 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manila-db-sync,Image:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,Command:[/bin/bash],Args:[-c sleep 0 && /usr/bin/manila-manage --config-dir /etc/manila/manila.conf.d db sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:job-config-data,ReadOnly:true,MountPath:/etc/manila/manila.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sthvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42429,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42429,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Res
tartPolicy:nil,} start failed in pod manila-db-sync-ktg94_openstack(a43f9f8e-d118-4247-b1f0-b6aac984bb4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:09:24 crc kubenswrapper[4940]: E0223 09:09:24.030841 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/manila-db-sync-ktg94" podUID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.115468 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.122094 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.256086 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76557b5cdc-8z8m9" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.256085 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76557b5cdc-8z8m9" event={"ID":"4fe165a1-2722-4594-82d4-d9b9e5e88a56","Type":"ContainerDied","Data":"296838aa6b6a8a95fadb652db4cc85c6822da2d8cf5fa262722f8fcbc7ef481c"} Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.258811 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hcm9c" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.258802 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hcm9c" event={"ID":"bf56a426-9c5a-4a94-8740-fbe2c05dafbb","Type":"ContainerDied","Data":"c8d81b28a4b41c8a9cb8fa5e371c45861dda3969c8f697195bdbe3c396974d0a"} Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.258867 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d81b28a4b41c8a9cb8fa5e371c45861dda3969c8f697195bdbe3c396974d0a" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.261842 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86784fc7b9-5n6q4" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.262027 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86784fc7b9-5n6q4" event={"ID":"caff4ac2-d49f-41a4-a186-c7c1f398a54d","Type":"ContainerDied","Data":"4d9ab5ff008aa1123d500ac1f7811753deb43facccc38957ba36e3df9375cbe2"} Feb 23 09:09:24 crc kubenswrapper[4940]: E0223 09:09:24.262804 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manila-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-manila-api:current-podified\\\"\"" pod="openstack/manila-db-sync-ktg94" podUID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.303855 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-scripts\") pod \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.303930 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nr7g\" (UniqueName: 
\"kubernetes.io/projected/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-kube-api-access-5nr7g\") pod \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.303956 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-config-data\") pod \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.304048 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-config\") pod \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.304087 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/caff4ac2-d49f-41a4-a186-c7c1f398a54d-horizon-secret-key\") pod \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.304125 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caff4ac2-d49f-41a4-a186-c7c1f398a54d-logs\") pod \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.304151 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f29g\" (UniqueName: \"kubernetes.io/projected/caff4ac2-d49f-41a4-a186-c7c1f398a54d-kube-api-access-8f29g\") pod \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\" (UID: \"caff4ac2-d49f-41a4-a186-c7c1f398a54d\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.304229 4940 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-combined-ca-bundle\") pod \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\" (UID: \"bf56a426-9c5a-4a94-8740-fbe2c05dafbb\") " Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.304419 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-scripts" (OuterVolumeSpecName: "scripts") pod "caff4ac2-d49f-41a4-a186-c7c1f398a54d" (UID: "caff4ac2-d49f-41a4-a186-c7c1f398a54d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.305809 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caff4ac2-d49f-41a4-a186-c7c1f398a54d-logs" (OuterVolumeSpecName: "logs") pod "caff4ac2-d49f-41a4-a186-c7c1f398a54d" (UID: "caff4ac2-d49f-41a4-a186-c7c1f398a54d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.305989 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-config-data" (OuterVolumeSpecName: "config-data") pod "caff4ac2-d49f-41a4-a186-c7c1f398a54d" (UID: "caff4ac2-d49f-41a4-a186-c7c1f398a54d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.309502 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.309548 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caff4ac2-d49f-41a4-a186-c7c1f398a54d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.309577 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caff4ac2-d49f-41a4-a186-c7c1f398a54d-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.314161 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-kube-api-access-5nr7g" (OuterVolumeSpecName: "kube-api-access-5nr7g") pod "bf56a426-9c5a-4a94-8740-fbe2c05dafbb" (UID: "bf56a426-9c5a-4a94-8740-fbe2c05dafbb"). InnerVolumeSpecName "kube-api-access-5nr7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.318837 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caff4ac2-d49f-41a4-a186-c7c1f398a54d-kube-api-access-8f29g" (OuterVolumeSpecName: "kube-api-access-8f29g") pod "caff4ac2-d49f-41a4-a186-c7c1f398a54d" (UID: "caff4ac2-d49f-41a4-a186-c7c1f398a54d"). InnerVolumeSpecName "kube-api-access-8f29g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.321839 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caff4ac2-d49f-41a4-a186-c7c1f398a54d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "caff4ac2-d49f-41a4-a186-c7c1f398a54d" (UID: "caff4ac2-d49f-41a4-a186-c7c1f398a54d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.338064 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf56a426-9c5a-4a94-8740-fbe2c05dafbb" (UID: "bf56a426-9c5a-4a94-8740-fbe2c05dafbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.344996 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76557b5cdc-8z8m9"] Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.345623 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-config" (OuterVolumeSpecName: "config") pod "bf56a426-9c5a-4a94-8740-fbe2c05dafbb" (UID: "bf56a426-9c5a-4a94-8740-fbe2c05dafbb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.353921 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76557b5cdc-8z8m9"] Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.412156 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nr7g\" (UniqueName: \"kubernetes.io/projected/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-kube-api-access-5nr7g\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.412206 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.412221 4940 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/caff4ac2-d49f-41a4-a186-c7c1f398a54d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.412235 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f29g\" (UniqueName: \"kubernetes.io/projected/caff4ac2-d49f-41a4-a186-c7c1f398a54d-kube-api-access-8f29g\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.412249 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf56a426-9c5a-4a94-8740-fbe2c05dafbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.667980 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86784fc7b9-5n6q4"] Feb 23 09:09:24 crc kubenswrapper[4940]: I0223 09:09:24.673248 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86784fc7b9-5n6q4"] Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.359909 4940 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="4fe165a1-2722-4594-82d4-d9b9e5e88a56" path="/var/lib/kubelet/pods/4fe165a1-2722-4594-82d4-d9b9e5e88a56/volumes" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.361053 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caff4ac2-d49f-41a4-a186-c7c1f398a54d" path="/var/lib/kubelet/pods/caff4ac2-d49f-41a4-a186-c7c1f398a54d/volumes" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.367316 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-q85q9"] Feb 23 09:09:25 crc kubenswrapper[4940]: E0223 09:09:25.367786 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf56a426-9c5a-4a94-8740-fbe2c05dafbb" containerName="neutron-db-sync" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.367805 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf56a426-9c5a-4a94-8740-fbe2c05dafbb" containerName="neutron-db-sync" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.367970 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf56a426-9c5a-4a94-8740-fbe2c05dafbb" containerName="neutron-db-sync" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.371936 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.393674 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-q85q9"] Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.435053 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smf2c\" (UniqueName: \"kubernetes.io/projected/a6c275c2-a5e4-4761-b103-641ef82153f5-kube-api-access-smf2c\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.435115 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.435811 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.435951 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.436037 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-config\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.436232 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.454134 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cc5d5d86-sr2r2"] Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.456010 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.462754 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.462831 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.462875 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.463021 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bntvw" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.475984 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cc5d5d86-sr2r2"] Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.537760 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-smf2c\" (UniqueName: \"kubernetes.io/projected/a6c275c2-a5e4-4761-b103-641ef82153f5-kube-api-access-smf2c\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.537806 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.537848 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.537893 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.537930 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-config\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.537996 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.538902 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-svc\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.539056 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.540200 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.540259 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.540314 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-config\") pod \"dnsmasq-dns-55f844cf75-q85q9\" 
(UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.573242 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smf2c\" (UniqueName: \"kubernetes.io/projected/a6c275c2-a5e4-4761-b103-641ef82153f5-kube-api-access-smf2c\") pod \"dnsmasq-dns-55f844cf75-q85q9\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.640075 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-config\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.640162 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-httpd-config\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.640194 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-combined-ca-bundle\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.640232 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbjt8\" (UniqueName: \"kubernetes.io/projected/a587b9bd-1362-449d-92a0-6b2f25b45735-kube-api-access-lbjt8\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: 
\"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.640416 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-ovndb-tls-certs\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.708493 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.742049 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-ovndb-tls-certs\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.742149 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-config\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.742249 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-httpd-config\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.742295 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-combined-ca-bundle\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.742344 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbjt8\" (UniqueName: \"kubernetes.io/projected/a587b9bd-1362-449d-92a0-6b2f25b45735-kube-api-access-lbjt8\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.747718 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-combined-ca-bundle\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.748309 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-httpd-config\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.749183 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-config\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.762412 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-ovndb-tls-certs\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: 
\"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.765562 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbjt8\" (UniqueName: \"kubernetes.io/projected/a587b9bd-1362-449d-92a0-6b2f25b45735-kube-api-access-lbjt8\") pod \"neutron-7cc5d5d86-sr2r2\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: E0223 09:09:25.766414 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 23 09:09:25 crc kubenswrapper[4940]: E0223 09:09:25.766578 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqrcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6p69q_openstack(ab97aa50-1b14-4a5c-82cd-1be9f025b2b5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:09:25 crc kubenswrapper[4940]: E0223 09:09:25.768118 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6p69q" podUID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.771972 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.808040 4940 scope.go:117] "RemoveContainer" containerID="94cafb9bbd168512415a6b93e03f7c0742ba6e5068268ad6517631e8bd1bd143" Feb 23 09:09:25 crc kubenswrapper[4940]: I0223 09:09:25.957036 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.155335 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlkf2\" (UniqueName: \"kubernetes.io/projected/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-kube-api-access-dlkf2\") pod \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.155909 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-scripts\") pod \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.156053 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-horizon-secret-key\") pod \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.156178 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-config-data\") pod \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.156238 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-logs\") pod \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\" (UID: \"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3\") " Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.157213 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-logs" (OuterVolumeSpecName: "logs") pod "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3" (UID: "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.161418 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-scripts" (OuterVolumeSpecName: "scripts") pod "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3" (UID: "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.165286 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-config-data" (OuterVolumeSpecName: "config-data") pod "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3" (UID: "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.185804 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3" (UID: "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.186414 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-kube-api-access-dlkf2" (OuterVolumeSpecName: "kube-api-access-dlkf2") pod "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3" (UID: "05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3"). InnerVolumeSpecName "kube-api-access-dlkf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.266626 4940 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.266664 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.266676 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.266688 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlkf2\" (UniqueName: \"kubernetes.io/projected/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-kube-api-access-dlkf2\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.266700 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.302172 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7d9wv"] Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.306637 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d4447f67f-bwqtp" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.306673 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d4447f67f-bwqtp" event={"ID":"05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3","Type":"ContainerDied","Data":"2d10b1b7fcecd98adf497965f111edf6a861b0792894ef59104ed60803372099"} Feb 23 09:09:26 crc kubenswrapper[4940]: E0223 09:09:26.311072 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6p69q" podUID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.412009 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d4447f67f-bwqtp"] Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.419714 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d4447f67f-bwqtp"] Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.443176 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.502593 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6884678d78-ckt87"] Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.510848 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8485464bb-cvmj5"] Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.658987 4940 scope.go:117] "RemoveContainer" containerID="6f47c6c78b1a63e4aab8f7ba80739a7738f5171315a9f9c9ca4cdd4e57d06c47" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.887674 4940 scope.go:117] "RemoveContainer" containerID="d17f230221091f10b35e26889ddeb92a41f743813d36024c58f17a18baacb26b" Feb 23 09:09:26 crc kubenswrapper[4940]: I0223 09:09:26.987029 4940 
scope.go:117] "RemoveContainer" containerID="de637a66fdb051f9a3f6f9ebd74e992ff6ea57aba8da0127ab4e6d0f58dc984c" Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.220575 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.236463 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-q85q9"] Feb 23 09:09:27 crc kubenswrapper[4940]: W0223 09:09:27.267080 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c275c2_a5e4_4761_b103_641ef82153f5.slice/crio-e52090449693cb10dc59cd27a5620e2e079afc1f9f1a510f86a6815cb4cf9679 WatchSource:0}: Error finding container e52090449693cb10dc59cd27a5620e2e079afc1f9f1a510f86a6815cb4cf9679: Status 404 returned error can't find the container with id e52090449693cb10dc59cd27a5620e2e079afc1f9f1a510f86a6815cb4cf9679 Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.331715 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cc5d5d86-sr2r2"] Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.360244 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3" path="/var/lib/kubelet/pods/05fcd3d3-fb9e-420f-a8b6-72bf1c84f0b3/volumes" Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.360578 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6884678d78-ckt87" event={"ID":"e330abf6-9282-4221-b286-672ffc3985e7","Type":"ContainerStarted","Data":"b2f6b421fd3fcdbe1e9e51fe97930ec00f103bfbd3e3aa172f2f7caee49d75a0"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.360603 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mphgm" 
event={"ID":"b430a58d-ed32-4642-ac93-d6f0de2eeb0d","Type":"ContainerStarted","Data":"7584d944668278dbc303ed8ea0f9f93364b2d04f6bd4c7bd4b351eb7e68181a0"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.360644 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8485464bb-cvmj5" event={"ID":"0c698dee-e3c4-44d3-a08b-73e6b1e87986","Type":"ContainerStarted","Data":"24210f8f44c28fa84214a281091112554055677392f8da2ea7010c0b38e8d74d"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.362646 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"301b7712-08f5-47e5-b4bf-f1c3033eba8d","Type":"ContainerStarted","Data":"f21125ec4223acc0f911d393b368b7f327de745cfbdcf21d8d02fbe2f5fba52b"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.363896 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3fdde6c-2b86-4f4c-b431-292279528e91","Type":"ContainerStarted","Data":"79287ac34ed1c2aa4a9aa7fb372145156f95f46c3bfc296d298bfa4e70ebd52d"} Feb 23 09:09:27 crc kubenswrapper[4940]: W0223 09:09:27.366705 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda587b9bd_1362_449d_92a0_6b2f25b45735.slice/crio-e02385b9d9223fb7aca0f8e3b245a1ec416f44cdc12ca79700aa821d7f0fff1b WatchSource:0}: Error finding container e02385b9d9223fb7aca0f8e3b245a1ec416f44cdc12ca79700aa821d7f0fff1b: Status 404 returned error can't find the container with id e02385b9d9223fb7aca0f8e3b245a1ec416f44cdc12ca79700aa821d7f0fff1b Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.367687 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" event={"ID":"a6c275c2-a5e4-4761-b103-641ef82153f5","Type":"ContainerStarted","Data":"e52090449693cb10dc59cd27a5620e2e079afc1f9f1a510f86a6815cb4cf9679"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 
09:09:27.390528 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mphgm" podStartSLOduration=6.007682801 podStartE2EDuration="34.390501272s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="2026-02-23 09:08:54.9862478 +0000 UTC m=+1266.369453957" lastFinishedPulling="2026-02-23 09:09:23.369066281 +0000 UTC m=+1294.752272428" observedRunningTime="2026-02-23 09:09:27.366694465 +0000 UTC m=+1298.749900632" watchObservedRunningTime="2026-02-23 09:09:27.390501272 +0000 UTC m=+1298.773707429" Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.407251 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7d9wv" event={"ID":"b224c257-f773-40f9-b62b-8d6e897ed198","Type":"ContainerStarted","Data":"cd2764c789e4740aa378dbf6c1d22791d291e706850273c689c394120e943215"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.407303 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7d9wv" event={"ID":"b224c257-f773-40f9-b62b-8d6e897ed198","Type":"ContainerStarted","Data":"999ccab4397619dbb48e4fa69b23709970e0792affe3d5eb572cf30e6363835f"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.487829 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rxlnz" event={"ID":"7c217a33-e32d-41cc-8fda-6691bf37db15","Type":"ContainerStarted","Data":"935a7a505dfe9d7724626c35bf0f3d5f01b1fcafcb203ec8ac32ce3cc29422db"} Feb 23 09:09:27 crc kubenswrapper[4940]: I0223 09:09:27.520689 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7d9wv" podStartSLOduration=24.520670772 podStartE2EDuration="24.520670772s" podCreationTimestamp="2026-02-23 09:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:27.458824469 +0000 UTC m=+1298.842030636" 
watchObservedRunningTime="2026-02-23 09:09:27.520670772 +0000 UTC m=+1298.903876919" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.192863 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rxlnz" podStartSLOduration=5.852117039 podStartE2EDuration="35.192834568s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="2026-02-23 09:08:54.676003818 +0000 UTC m=+1266.059209975" lastFinishedPulling="2026-02-23 09:09:24.016721347 +0000 UTC m=+1295.399927504" observedRunningTime="2026-02-23 09:09:27.521105706 +0000 UTC m=+1298.904311853" watchObservedRunningTime="2026-02-23 09:09:28.192834568 +0000 UTC m=+1299.576040725" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.203462 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7ffc6bfc65-qhp9j"] Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.205337 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.211997 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.212486 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.242293 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ffc6bfc65-qhp9j"] Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.317495 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-public-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 
09:09:28.317570 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b6ace7c6-781c-4053-8e6e-26232d9355da-kube-api-access-f5s77\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.317631 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-httpd-config\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.317742 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-ovndb-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.317940 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-internal-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.318017 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-combined-ca-bundle\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc 
kubenswrapper[4940]: I0223 09:09:28.318065 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-config\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.420770 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-config\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.420885 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-public-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.420923 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b6ace7c6-781c-4053-8e6e-26232d9355da-kube-api-access-f5s77\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.420958 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-httpd-config\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.421069 4940 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-ovndb-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.421116 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-internal-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.421148 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-combined-ca-bundle\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.425457 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-combined-ca-bundle\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.430313 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-httpd-config\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.434310 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-public-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.436441 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-ovndb-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.440358 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-config\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.445640 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-internal-tls-certs\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.447700 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b6ace7c6-781c-4053-8e6e-26232d9355da-kube-api-access-f5s77\") pod \"neutron-7ffc6bfc65-qhp9j\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.571888 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cc5d5d86-sr2r2" 
event={"ID":"a587b9bd-1362-449d-92a0-6b2f25b45735","Type":"ContainerStarted","Data":"f8f0ef202b54c93d5f890ebcc3a445b472067725bf46cacfd6e9c5cfa9fd63ad"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.572211 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cc5d5d86-sr2r2" event={"ID":"a587b9bd-1362-449d-92a0-6b2f25b45735","Type":"ContainerStarted","Data":"b34f129a279e5f9d6d4a796ffb2079cee98358e47fe89b7c10add880a8f7af7e"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.572226 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cc5d5d86-sr2r2" event={"ID":"a587b9bd-1362-449d-92a0-6b2f25b45735","Type":"ContainerStarted","Data":"e02385b9d9223fb7aca0f8e3b245a1ec416f44cdc12ca79700aa821d7f0fff1b"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.572261 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.588467 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.612450 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cc5d5d86-sr2r2" podStartSLOduration=3.612430659 podStartE2EDuration="3.612430659s" podCreationTimestamp="2026-02-23 09:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:28.612042856 +0000 UTC m=+1299.995249023" watchObservedRunningTime="2026-02-23 09:09:28.612430659 +0000 UTC m=+1299.995636816" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.612514 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerStarted","Data":"f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.654598 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8485464bb-cvmj5" event={"ID":"0c698dee-e3c4-44d3-a08b-73e6b1e87986","Type":"ContainerStarted","Data":"0b5ba7c24d799d97fff8361b493bd9dffc9ddf3fcc9ebf75ba3b8292df21351d"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.654683 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8485464bb-cvmj5" event={"ID":"0c698dee-e3c4-44d3-a08b-73e6b1e87986","Type":"ContainerStarted","Data":"d4e9711e52cb1c545d97d739a3264a8254a8bd83ddd8073c0f14486fd75366ba"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.711244 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6884678d78-ckt87" event={"ID":"e330abf6-9282-4221-b286-672ffc3985e7","Type":"ContainerStarted","Data":"7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.711303 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-6884678d78-ckt87" event={"ID":"e330abf6-9282-4221-b286-672ffc3985e7","Type":"ContainerStarted","Data":"9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.751818 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8485464bb-cvmj5" podStartSLOduration=26.270464126 podStartE2EDuration="26.751792426s" podCreationTimestamp="2026-02-23 09:09:02 +0000 UTC" firstStartedPulling="2026-02-23 09:09:26.658462667 +0000 UTC m=+1298.041668824" lastFinishedPulling="2026-02-23 09:09:27.139790967 +0000 UTC m=+1298.522997124" observedRunningTime="2026-02-23 09:09:28.701537278 +0000 UTC m=+1300.084743435" watchObservedRunningTime="2026-02-23 09:09:28.751792426 +0000 UTC m=+1300.134998583" Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.788011 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"301b7712-08f5-47e5-b4bf-f1c3033eba8d","Type":"ContainerStarted","Data":"3b6b92fcd4fc029fa385dfd58b0573e51722a8378618e58b1258bf6bd5622be1"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.796895 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3fdde6c-2b86-4f4c-b431-292279528e91","Type":"ContainerStarted","Data":"225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.809303 4940 generic.go:334] "Generic (PLEG): container finished" podID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerID="82b2579512ad25d8fe0460debe2a2e4350256279749682da587366dc659d6256" exitCode=0 Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.810693 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" 
event={"ID":"a6c275c2-a5e4-4761-b103-641ef82153f5","Type":"ContainerDied","Data":"82b2579512ad25d8fe0460debe2a2e4350256279749682da587366dc659d6256"} Feb 23 09:09:28 crc kubenswrapper[4940]: I0223 09:09:28.943082 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6884678d78-ckt87" podStartSLOduration=26.438567938 podStartE2EDuration="26.943060115s" podCreationTimestamp="2026-02-23 09:09:02 +0000 UTC" firstStartedPulling="2026-02-23 09:09:26.602948563 +0000 UTC m=+1297.986154730" lastFinishedPulling="2026-02-23 09:09:27.10744074 +0000 UTC m=+1298.490646907" observedRunningTime="2026-02-23 09:09:28.777309498 +0000 UTC m=+1300.160515675" watchObservedRunningTime="2026-02-23 09:09:28.943060115 +0000 UTC m=+1300.326266272" Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.310228 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ffc6bfc65-qhp9j"] Feb 23 09:09:29 crc kubenswrapper[4940]: W0223 09:09:29.313806 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6ace7c6_781c_4053_8e6e_26232d9355da.slice/crio-822dbf834a34eff4ed53946e3e7f9d40658456a755d08ba57863c8f211a2ef0a WatchSource:0}: Error finding container 822dbf834a34eff4ed53946e3e7f9d40658456a755d08ba57863c8f211a2ef0a: Status 404 returned error can't find the container with id 822dbf834a34eff4ed53946e3e7f9d40658456a755d08ba57863c8f211a2ef0a Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.836760 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" event={"ID":"a6c275c2-a5e4-4761-b103-641ef82153f5","Type":"ContainerStarted","Data":"6f5d1af74217454355635a221cfb2b337188566b3f28a25eaf9bc292127d925f"} Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.837124 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:29 crc 
kubenswrapper[4940]: I0223 09:09:29.853345 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffc6bfc65-qhp9j" event={"ID":"b6ace7c6-781c-4053-8e6e-26232d9355da","Type":"ContainerStarted","Data":"74b3089cf0f921509af84286bb5056e18b7337f0e6fb28a9d0aa489627692b95"} Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.853393 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffc6bfc65-qhp9j" event={"ID":"b6ace7c6-781c-4053-8e6e-26232d9355da","Type":"ContainerStarted","Data":"822dbf834a34eff4ed53946e3e7f9d40658456a755d08ba57863c8f211a2ef0a"} Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.867903 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"301b7712-08f5-47e5-b4bf-f1c3033eba8d","Type":"ContainerStarted","Data":"1aa64c34618472d0564c8fc7028b863b88d1acb04832bfcd0dff1c20073ebc14"} Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.879708 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" podStartSLOduration=4.871926064 podStartE2EDuration="4.871926064s" podCreationTimestamp="2026-02-23 09:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:29.856136919 +0000 UTC m=+1301.239343076" watchObservedRunningTime="2026-02-23 09:09:29.871926064 +0000 UTC m=+1301.255132221" Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.901985 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3fdde6c-2b86-4f4c-b431-292279528e91","Type":"ContainerStarted","Data":"0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a"} Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.912398 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=26.912375324 podStartE2EDuration="26.912375324s" podCreationTimestamp="2026-02-23 09:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:29.89407569 +0000 UTC m=+1301.277281847" watchObservedRunningTime="2026-02-23 09:09:29.912375324 +0000 UTC m=+1301.295581481" Feb 23 09:09:29 crc kubenswrapper[4940]: I0223 09:09:29.939589 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.939562439 podStartE2EDuration="6.939562439s" podCreationTimestamp="2026-02-23 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:29.927026335 +0000 UTC m=+1301.310232502" watchObservedRunningTime="2026-02-23 09:09:29.939562439 +0000 UTC m=+1301.322768596" Feb 23 09:09:30 crc kubenswrapper[4940]: I0223 09:09:30.914514 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffc6bfc65-qhp9j" event={"ID":"b6ace7c6-781c-4053-8e6e-26232d9355da","Type":"ContainerStarted","Data":"e3c8d0b0648ec94d2501f62779ab5e1aa6d83808ac2d9c0160962627213f80f6"} Feb 23 09:09:30 crc kubenswrapper[4940]: I0223 09:09:30.915558 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:09:30 crc kubenswrapper[4940]: I0223 09:09:30.966911 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7ffc6bfc65-qhp9j" podStartSLOduration=2.966886631 podStartE2EDuration="2.966886631s" podCreationTimestamp="2026-02-23 09:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:30.960998026 +0000 UTC m=+1302.344204183" watchObservedRunningTime="2026-02-23 09:09:30.966886631 +0000 UTC 
m=+1302.350092788" Feb 23 09:09:31 crc kubenswrapper[4940]: I0223 09:09:31.929486 4940 generic.go:334] "Generic (PLEG): container finished" podID="7c217a33-e32d-41cc-8fda-6691bf37db15" containerID="935a7a505dfe9d7724626c35bf0f3d5f01b1fcafcb203ec8ac32ce3cc29422db" exitCode=0 Feb 23 09:09:31 crc kubenswrapper[4940]: I0223 09:09:31.929573 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rxlnz" event={"ID":"7c217a33-e32d-41cc-8fda-6691bf37db15","Type":"ContainerDied","Data":"935a7a505dfe9d7724626c35bf0f3d5f01b1fcafcb203ec8ac32ce3cc29422db"} Feb 23 09:09:31 crc kubenswrapper[4940]: I0223 09:09:31.933567 4940 generic.go:334] "Generic (PLEG): container finished" podID="b224c257-f773-40f9-b62b-8d6e897ed198" containerID="cd2764c789e4740aa378dbf6c1d22791d291e706850273c689c394120e943215" exitCode=0 Feb 23 09:09:31 crc kubenswrapper[4940]: I0223 09:09:31.933661 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7d9wv" event={"ID":"b224c257-f773-40f9-b62b-8d6e897ed198","Type":"ContainerDied","Data":"cd2764c789e4740aa378dbf6c1d22791d291e706850273c689c394120e943215"} Feb 23 09:09:32 crc kubenswrapper[4940]: I0223 09:09:32.577475 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:32 crc kubenswrapper[4940]: I0223 09:09:32.577739 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:09:32 crc kubenswrapper[4940]: I0223 09:09:32.590810 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:32 crc kubenswrapper[4940]: I0223 09:09:32.591478 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:09:32 crc kubenswrapper[4940]: I0223 09:09:32.945711 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="b430a58d-ed32-4642-ac93-d6f0de2eeb0d" containerID="7584d944668278dbc303ed8ea0f9f93364b2d04f6bd4c7bd4b351eb7e68181a0" exitCode=0 Feb 23 09:09:32 crc kubenswrapper[4940]: I0223 09:09:32.945772 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mphgm" event={"ID":"b430a58d-ed32-4642-ac93-d6f0de2eeb0d","Type":"ContainerDied","Data":"7584d944668278dbc303ed8ea0f9f93364b2d04f6bd4c7bd4b351eb7e68181a0"} Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.477692 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.477838 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.478035 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.478664 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.514063 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.545564 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.685576 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.685659 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.719355 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.729148 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.956401 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:33 crc kubenswrapper[4940]: I0223 09:09:33.956446 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:35 crc kubenswrapper[4940]: I0223 09:09:35.724860 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:35 crc kubenswrapper[4940]: I0223 09:09:35.828128 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s4wc7"] Feb 23 09:09:35 crc kubenswrapper[4940]: I0223 09:09:35.828399 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerName="dnsmasq-dns" containerID="cri-o://1bf928855769ada0b818334a0a7eaf1e33ae4bf03eeec0a85338f4139f7f3393" gracePeriod=10 Feb 23 09:09:36 crc kubenswrapper[4940]: I0223 09:09:36.032375 4940 generic.go:334] "Generic (PLEG): container finished" podID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerID="1bf928855769ada0b818334a0a7eaf1e33ae4bf03eeec0a85338f4139f7f3393" exitCode=0 Feb 23 09:09:36 crc kubenswrapper[4940]: I0223 09:09:36.032428 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" event={"ID":"7dc47a31-b1a6-40b2-8d67-5d60854fea4e","Type":"ContainerDied","Data":"1bf928855769ada0b818334a0a7eaf1e33ae4bf03eeec0a85338f4139f7f3393"} Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.168217 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.168394 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.305054 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.832718 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rxlnz" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.848291 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-7d9wv" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.895937 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mphgm" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904075 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c217a33-e32d-41cc-8fda-6691bf37db15-logs\") pod \"7c217a33-e32d-41cc-8fda-6691bf37db15\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904184 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-config-data\") pod \"7c217a33-e32d-41cc-8fda-6691bf37db15\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904313 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hc6w\" (UniqueName: \"kubernetes.io/projected/b224c257-f773-40f9-b62b-8d6e897ed198-kube-api-access-7hc6w\") pod 
\"b224c257-f773-40f9-b62b-8d6e897ed198\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904415 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-combined-ca-bundle\") pod \"b224c257-f773-40f9-b62b-8d6e897ed198\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904494 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrtb\" (UniqueName: \"kubernetes.io/projected/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-kube-api-access-rsrtb\") pod \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904600 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xffzp\" (UniqueName: \"kubernetes.io/projected/7c217a33-e32d-41cc-8fda-6691bf37db15-kube-api-access-xffzp\") pod \"7c217a33-e32d-41cc-8fda-6691bf37db15\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904748 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-scripts\") pod \"b224c257-f773-40f9-b62b-8d6e897ed198\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.904886 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-credential-keys\") pod \"b224c257-f773-40f9-b62b-8d6e897ed198\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.905035 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-combined-ca-bundle\") pod \"7c217a33-e32d-41cc-8fda-6691bf37db15\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.905139 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-scripts\") pod \"7c217a33-e32d-41cc-8fda-6691bf37db15\" (UID: \"7c217a33-e32d-41cc-8fda-6691bf37db15\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.905263 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-config-data\") pod \"b224c257-f773-40f9-b62b-8d6e897ed198\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.905339 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-combined-ca-bundle\") pod \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\" (UID: \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.905407 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-fernet-keys\") pod \"b224c257-f773-40f9-b62b-8d6e897ed198\" (UID: \"b224c257-f773-40f9-b62b-8d6e897ed198\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.905546 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-db-sync-config-data\") pod \"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\" (UID: 
\"b430a58d-ed32-4642-ac93-d6f0de2eeb0d\") " Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.914469 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b224c257-f773-40f9-b62b-8d6e897ed198-kube-api-access-7hc6w" (OuterVolumeSpecName: "kube-api-access-7hc6w") pod "b224c257-f773-40f9-b62b-8d6e897ed198" (UID: "b224c257-f773-40f9-b62b-8d6e897ed198"). InnerVolumeSpecName "kube-api-access-7hc6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.914806 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c217a33-e32d-41cc-8fda-6691bf37db15-logs" (OuterVolumeSpecName: "logs") pod "7c217a33-e32d-41cc-8fda-6691bf37db15" (UID: "7c217a33-e32d-41cc-8fda-6691bf37db15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.915169 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-scripts" (OuterVolumeSpecName: "scripts") pod "7c217a33-e32d-41cc-8fda-6691bf37db15" (UID: "7c217a33-e32d-41cc-8fda-6691bf37db15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.918370 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b224c257-f773-40f9-b62b-8d6e897ed198" (UID: "b224c257-f773-40f9-b62b-8d6e897ed198"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.926812 4940 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.926846 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.926860 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c217a33-e32d-41cc-8fda-6691bf37db15-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.926871 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hc6w\" (UniqueName: \"kubernetes.io/projected/b224c257-f773-40f9-b62b-8d6e897ed198-kube-api-access-7hc6w\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.926988 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-scripts" (OuterVolumeSpecName: "scripts") pod "b224c257-f773-40f9-b62b-8d6e897ed198" (UID: "b224c257-f773-40f9-b62b-8d6e897ed198"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.933997 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c217a33-e32d-41cc-8fda-6691bf37db15-kube-api-access-xffzp" (OuterVolumeSpecName: "kube-api-access-xffzp") pod "7c217a33-e32d-41cc-8fda-6691bf37db15" (UID: "7c217a33-e32d-41cc-8fda-6691bf37db15"). InnerVolumeSpecName "kube-api-access-xffzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.951788 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b430a58d-ed32-4642-ac93-d6f0de2eeb0d" (UID: "b430a58d-ed32-4642-ac93-d6f0de2eeb0d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.959902 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-kube-api-access-rsrtb" (OuterVolumeSpecName: "kube-api-access-rsrtb") pod "b430a58d-ed32-4642-ac93-d6f0de2eeb0d" (UID: "b430a58d-ed32-4642-ac93-d6f0de2eeb0d"). InnerVolumeSpecName "kube-api-access-rsrtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.969776 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b224c257-f773-40f9-b62b-8d6e897ed198" (UID: "b224c257-f773-40f9-b62b-8d6e897ed198"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:37 crc kubenswrapper[4940]: I0223 09:09:37.986009 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-config-data" (OuterVolumeSpecName: "config-data") pod "7c217a33-e32d-41cc-8fda-6691bf37db15" (UID: "7c217a33-e32d-41cc-8fda-6691bf37db15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.028944 4940 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.028985 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.028995 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrtb\" (UniqueName: \"kubernetes.io/projected/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-kube-api-access-rsrtb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.029014 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xffzp\" (UniqueName: \"kubernetes.io/projected/7c217a33-e32d-41cc-8fda-6691bf37db15-kube-api-access-xffzp\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.029022 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.029031 4940 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.068895 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-config-data" (OuterVolumeSpecName: "config-data") pod "b224c257-f773-40f9-b62b-8d6e897ed198" (UID: "b224c257-f773-40f9-b62b-8d6e897ed198"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.084188 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.084619 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b224c257-f773-40f9-b62b-8d6e897ed198" (UID: "b224c257-f773-40f9-b62b-8d6e897ed198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.092729 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b430a58d-ed32-4642-ac93-d6f0de2eeb0d" (UID: "b430a58d-ed32-4642-ac93-d6f0de2eeb0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.122066 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7d9wv" event={"ID":"b224c257-f773-40f9-b62b-8d6e897ed198","Type":"ContainerDied","Data":"999ccab4397619dbb48e4fa69b23709970e0792affe3d5eb572cf30e6363835f"} Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.122114 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="999ccab4397619dbb48e4fa69b23709970e0792affe3d5eb572cf30e6363835f" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.122210 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7d9wv" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.131302 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-config\") pod \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.131404 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-nb\") pod \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.131448 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-swift-storage-0\") pod \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.131672 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6dn\" (UniqueName: \"kubernetes.io/projected/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-kube-api-access-dz6dn\") pod \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.131758 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-sb\") pod \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.131794 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-svc\") pod \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\" (UID: \"7dc47a31-b1a6-40b2-8d67-5d60854fea4e\") " Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.132292 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.132316 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b224c257-f773-40f9-b62b-8d6e897ed198-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.132329 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b430a58d-ed32-4642-ac93-d6f0de2eeb0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.143913 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c217a33-e32d-41cc-8fda-6691bf37db15" (UID: "7c217a33-e32d-41cc-8fda-6691bf37db15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.177314 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-kube-api-access-dz6dn" (OuterVolumeSpecName: "kube-api-access-dz6dn") pod "7dc47a31-b1a6-40b2-8d67-5d60854fea4e" (UID: "7dc47a31-b1a6-40b2-8d67-5d60854fea4e"). InnerVolumeSpecName "kube-api-access-dz6dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.179347 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" event={"ID":"7dc47a31-b1a6-40b2-8d67-5d60854fea4e","Type":"ContainerDied","Data":"26d81b74267ebec2270adbc9d151982b32431fc15e490e012f205a245b807b97"} Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.179395 4940 scope.go:117] "RemoveContainer" containerID="1bf928855769ada0b818334a0a7eaf1e33ae4bf03eeec0a85338f4139f7f3393" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.179516 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-s4wc7" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.184210 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mphgm" event={"ID":"b430a58d-ed32-4642-ac93-d6f0de2eeb0d","Type":"ContainerDied","Data":"da277968472d3c31a972269d1529bfb7c7881186deb860687c991ec7ea656593"} Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.184243 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da277968472d3c31a972269d1529bfb7c7881186deb860687c991ec7ea656593" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.184284 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mphgm" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.186550 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rxlnz" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.187568 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rxlnz" event={"ID":"7c217a33-e32d-41cc-8fda-6691bf37db15","Type":"ContainerDied","Data":"ab3d4155c70b911088fcaae189a2aa3c2cfd709ddc0dbf3c363ec5b5fa235783"} Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.187595 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3d4155c70b911088fcaae189a2aa3c2cfd709ddc0dbf3c363ec5b5fa235783" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.208067 4940 scope.go:117] "RemoveContainer" containerID="0262a32618565c7f5571ccacc289cae4e7cbd2594d01b023febd1caf10395488" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.235921 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6dn\" (UniqueName: \"kubernetes.io/projected/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-kube-api-access-dz6dn\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.235952 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c217a33-e32d-41cc-8fda-6691bf37db15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.335481 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7dc47a31-b1a6-40b2-8d67-5d60854fea4e" (UID: "7dc47a31-b1a6-40b2-8d67-5d60854fea4e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.337105 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.338258 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7dc47a31-b1a6-40b2-8d67-5d60854fea4e" (UID: "7dc47a31-b1a6-40b2-8d67-5d60854fea4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.385250 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7dc47a31-b1a6-40b2-8d67-5d60854fea4e" (UID: "7dc47a31-b1a6-40b2-8d67-5d60854fea4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.390867 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7dc47a31-b1a6-40b2-8d67-5d60854fea4e" (UID: "7dc47a31-b1a6-40b2-8d67-5d60854fea4e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.417815 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-config" (OuterVolumeSpecName: "config") pod "7dc47a31-b1a6-40b2-8d67-5d60854fea4e" (UID: "7dc47a31-b1a6-40b2-8d67-5d60854fea4e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.438871 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.438905 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.438918 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.438928 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7dc47a31-b1a6-40b2-8d67-5d60854fea4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.518915 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s4wc7"] Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.527899 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-s4wc7"] Feb 23 09:09:38 crc kubenswrapper[4940]: I0223 09:09:38.617366 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.037593 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-657b46f66d-5snf5"] Feb 23 09:09:39 crc kubenswrapper[4940]: E0223 09:09:39.045858 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b430a58d-ed32-4642-ac93-d6f0de2eeb0d" 
containerName="barbican-db-sync" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.045880 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b430a58d-ed32-4642-ac93-d6f0de2eeb0d" containerName="barbican-db-sync" Feb 23 09:09:39 crc kubenswrapper[4940]: E0223 09:09:39.045894 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerName="init" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.045901 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerName="init" Feb 23 09:09:39 crc kubenswrapper[4940]: E0223 09:09:39.045915 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerName="dnsmasq-dns" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.045921 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerName="dnsmasq-dns" Feb 23 09:09:39 crc kubenswrapper[4940]: E0223 09:09:39.045933 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b224c257-f773-40f9-b62b-8d6e897ed198" containerName="keystone-bootstrap" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.045939 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b224c257-f773-40f9-b62b-8d6e897ed198" containerName="keystone-bootstrap" Feb 23 09:09:39 crc kubenswrapper[4940]: E0223 09:09:39.045947 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c217a33-e32d-41cc-8fda-6691bf37db15" containerName="placement-db-sync" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.045953 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c217a33-e32d-41cc-8fda-6691bf37db15" containerName="placement-db-sync" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.046125 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b224c257-f773-40f9-b62b-8d6e897ed198" containerName="keystone-bootstrap" 
Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.046146 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b430a58d-ed32-4642-ac93-d6f0de2eeb0d" containerName="barbican-db-sync" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.046164 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" containerName="dnsmasq-dns" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.046177 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c217a33-e32d-41cc-8fda-6691bf37db15" containerName="placement-db-sync" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.046748 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.051061 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.051319 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l76pf" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.051468 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.051661 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.051771 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.054253 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.075988 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-657b46f66d-5snf5"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 
09:09:39.139281 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c6749c74d-ng8p9"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.140986 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.145093 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bwcqd" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.145214 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.145443 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154396 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwj2\" (UniqueName: \"kubernetes.io/projected/14b5e353-0333-4351-a628-4767407854ec-kube-api-access-kjwj2\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154453 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-internal-tls-certs\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154475 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-credential-keys\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " 
pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154505 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-internal-tls-certs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154531 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7xts\" (UniqueName: \"kubernetes.io/projected/f433de8f-71fb-4f02-a223-871cc2959145-kube-api-access-s7xts\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154571 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f433de8f-71fb-4f02-a223-871cc2959145-logs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154594 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-fernet-keys\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154622 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-combined-ca-bundle\") pod \"placement-6c6749c74d-ng8p9\" (UID: 
\"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154661 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-public-tls-certs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154732 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-scripts\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154819 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-config-data\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154840 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-public-tls-certs\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154860 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-combined-ca-bundle\") pod \"keystone-657b46f66d-5snf5\" (UID: 
\"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154923 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-scripts\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.154954 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-config-data\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.156093 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.156280 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.170341 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c6749c74d-ng8p9"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.202823 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-677b768799-7xn5v"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.204586 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.206383 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-89wzw" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.206989 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.207151 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.228683 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-677b768799-7xn5v"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256750 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwj2\" (UniqueName: \"kubernetes.io/projected/14b5e353-0333-4351-a628-4767407854ec-kube-api-access-kjwj2\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256800 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-internal-tls-certs\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256822 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-credential-keys\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256848 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-internal-tls-certs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256874 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7xts\" (UniqueName: \"kubernetes.io/projected/f433de8f-71fb-4f02-a223-871cc2959145-kube-api-access-s7xts\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256910 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f433de8f-71fb-4f02-a223-871cc2959145-logs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256935 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-fernet-keys\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256963 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data-custom\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.256984 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-combined-ca-bundle\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257001 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-public-tls-certs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257019 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b394c0-8e56-4cfd-b85b-27109abc1b52-logs\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257050 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-combined-ca-bundle\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257083 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-scripts\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257099 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257128 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4vw\" (UniqueName: \"kubernetes.io/projected/02b394c0-8e56-4cfd-b85b-27109abc1b52-kube-api-access-2d4vw\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257190 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-config-data\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257214 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-public-tls-certs\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257236 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-combined-ca-bundle\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257292 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-scripts\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257316 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-config-data\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.257669 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ktg94" event={"ID":"a43f9f8e-d118-4247-b1f0-b6aac984bb4d","Type":"ContainerStarted","Data":"342f7def9f50941425d743518c4769503c938bad72ec71dd786fa1971cffb42d"} Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.266166 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f433de8f-71fb-4f02-a223-871cc2959145-logs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.273680 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerStarted","Data":"1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3"} Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.274498 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-public-tls-certs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 
09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.280383 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-config-data\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.282745 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5fff59f9db-l27lk"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.283291 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-fernet-keys\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.284116 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.284399 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-internal-tls-certs\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.292574 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-combined-ca-bundle\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.299456 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.320210 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-combined-ca-bundle\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.323687 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fff59f9db-l27lk"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.328879 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-config-data\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: 
I0223 09:09:39.329347 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwj2\" (UniqueName: \"kubernetes.io/projected/14b5e353-0333-4351-a628-4767407854ec-kube-api-access-kjwj2\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.330022 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-scripts\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.330243 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-internal-tls-certs\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.332986 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-credential-keys\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.336888 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7xts\" (UniqueName: \"kubernetes.io/projected/f433de8f-71fb-4f02-a223-871cc2959145-kube-api-access-s7xts\") pod \"placement-6c6749c74d-ng8p9\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.336984 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-scripts\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.337621 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14b5e353-0333-4351-a628-4767407854ec-public-tls-certs\") pod \"keystone-657b46f66d-5snf5\" (UID: \"14b5e353-0333-4351-a628-4767407854ec\") " pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.358952 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-ktg94" podStartSLOduration=3.377398062 podStartE2EDuration="46.358932219s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="2026-02-23 09:08:54.605604983 +0000 UTC m=+1265.988811140" lastFinishedPulling="2026-02-23 09:09:37.58713913 +0000 UTC m=+1308.970345297" observedRunningTime="2026-02-23 09:09:39.313646776 +0000 UTC m=+1310.696852953" watchObservedRunningTime="2026-02-23 09:09:39.358932219 +0000 UTC m=+1310.742138376" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362483 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362523 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " 
pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362539 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data-custom\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362584 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d4vw\" (UniqueName: \"kubernetes.io/projected/02b394c0-8e56-4cfd-b85b-27109abc1b52-kube-api-access-2d4vw\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362602 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-combined-ca-bundle\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362669 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phh4\" (UniqueName: \"kubernetes.io/projected/5488f264-b445-4d42-9d11-9b74f2c9b1f5-kube-api-access-7phh4\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362820 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data-custom\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362844 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b394c0-8e56-4cfd-b85b-27109abc1b52-logs\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362865 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5488f264-b445-4d42-9d11-9b74f2c9b1f5-logs\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.362883 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-combined-ca-bundle\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.366435 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-combined-ca-bundle\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.366882 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/02b394c0-8e56-4cfd-b85b-27109abc1b52-logs\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.370484 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.452601 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4vw\" (UniqueName: \"kubernetes.io/projected/02b394c0-8e56-4cfd-b85b-27109abc1b52-kube-api-access-2d4vw\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.465040 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5488f264-b445-4d42-9d11-9b74f2c9b1f5-logs\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.465136 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.465156 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data-custom\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " 
pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.465208 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-combined-ca-bundle\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.465270 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phh4\" (UniqueName: \"kubernetes.io/projected/5488f264-b445-4d42-9d11-9b74f2c9b1f5-kube-api-access-7phh4\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.466467 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5488f264-b445-4d42-9d11-9b74f2c9b1f5-logs\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.476099 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.509911 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.513191 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-combined-ca-bundle\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.514181 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data-custom\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.542226 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7phh4\" (UniqueName: \"kubernetes.io/projected/5488f264-b445-4d42-9d11-9b74f2c9b1f5-kube-api-access-7phh4\") pod \"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.545289 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data-custom\") pod 
\"barbican-keystone-listener-5fff59f9db-l27lk\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.595865 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data\") pod \"barbican-worker-677b768799-7xn5v\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.647101 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.675980 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.804331 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc47a31-b1a6-40b2-8d67-5d60854fea4e" path="/var/lib/kubelet/pods/7dc47a31-b1a6-40b2-8d67-5d60854fea4e/volumes" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.813076 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ghmbn"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.817689 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ghmbn"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.817790 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.835956 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7b9b88c6bc-hkv9v"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.844383 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.939760 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm"] Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.941292 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:39 crc kubenswrapper[4940]: I0223 09:09:39.973981 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b9b88c6bc-hkv9v"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.003445 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.003723 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-config-data\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.003869 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.004005 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-config-data-custom\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.004144 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.004246 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsstc\" (UniqueName: \"kubernetes.io/projected/b39bf698-8f6d-4434-a926-239c936bbdca-kube-api-access-tsstc\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.004419 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgbgb\" (UniqueName: \"kubernetes.io/projected/8008f8dc-0709-408f-88d1-0707f66c0a10-kube-api-access-jgbgb\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.004536 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-config\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.009120 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.009417 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-combined-ca-bundle\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.009834 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8008f8dc-0709-408f-88d1-0707f66c0a10-logs\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.039869 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.079007 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-96958f474-956sq"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.080837 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.098175 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64b687bd7d-jhpmr"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.099959 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.103738 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112077 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-config-data\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112141 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112260 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-config-data-custom\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112304 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112342 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-config-data\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112361 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsstc\" (UniqueName: \"kubernetes.io/projected/b39bf698-8f6d-4434-a926-239c936bbdca-kube-api-access-tsstc\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112421 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-combined-ca-bundle\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112448 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgbgb\" (UniqueName: \"kubernetes.io/projected/8008f8dc-0709-408f-88d1-0707f66c0a10-kube-api-access-jgbgb\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112464 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-logs\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112500 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-config\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112525 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112570 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8gvr\" (UniqueName: \"kubernetes.io/projected/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-kube-api-access-t8gvr\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112605 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-combined-ca-bundle\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112668 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8008f8dc-0709-408f-88d1-0707f66c0a10-logs\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112687 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-config-data-custom\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.112728 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.114203 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.121786 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-96958f474-956sq"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.122911 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-config\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.123173 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " 
pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.123627 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.123879 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8008f8dc-0709-408f-88d1-0707f66c0a10-logs\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.126145 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-config-data-custom\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.126675 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.129089 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-combined-ca-bundle\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc 
kubenswrapper[4940]: I0223 09:09:40.135124 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8008f8dc-0709-408f-88d1-0707f66c0a10-config-data\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.154961 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64b687bd7d-jhpmr"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.181976 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsstc\" (UniqueName: \"kubernetes.io/projected/b39bf698-8f6d-4434-a926-239c936bbdca-kube-api-access-tsstc\") pod \"dnsmasq-dns-85ff748b95-ghmbn\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.207882 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgbgb\" (UniqueName: \"kubernetes.io/projected/8008f8dc-0709-408f-88d1-0707f66c0a10-kube-api-access-jgbgb\") pod \"barbican-worker-7b9b88c6bc-hkv9v\" (UID: \"8008f8dc-0709-408f-88d1-0707f66c0a10\") " pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215404 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-public-tls-certs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215451 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data-custom\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215477 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8gvr\" (UniqueName: \"kubernetes.io/projected/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-kube-api-access-t8gvr\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215511 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f80e757-efd8-4d5f-a2bf-46c03b169956-logs\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215530 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-config-data-custom\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215545 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-scripts\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215580 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-internal-tls-certs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215642 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-config-data\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215670 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5495\" (UniqueName: \"kubernetes.io/projected/2f80e757-efd8-4d5f-a2bf-46c03b169956-kube-api-access-w5495\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215690 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-config-data\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215716 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e38493cb-6fde-4245-a5a4-99a91920708b-logs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215749 4940 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-combined-ca-bundle\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215767 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215783 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-combined-ca-bundle\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215800 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpkb\" (UniqueName: \"kubernetes.io/projected/e38493cb-6fde-4245-a5a4-99a91920708b-kube-api-access-nbpkb\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215829 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-combined-ca-bundle\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.215846 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-logs\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.216219 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-logs\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.227891 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.228470 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-config-data\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.258335 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-config-data-custom\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.258760 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-combined-ca-bundle\") pod 
\"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.265262 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8gvr\" (UniqueName: \"kubernetes.io/projected/8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce-kube-api-access-t8gvr\") pod \"barbican-keystone-listener-5d6cbfd9cd-f6hzm\" (UID: \"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce\") " pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318424 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-internal-tls-certs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318510 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-config-data\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318554 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5495\" (UniqueName: \"kubernetes.io/projected/2f80e757-efd8-4d5f-a2bf-46c03b169956-kube-api-access-w5495\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318596 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e38493cb-6fde-4245-a5a4-99a91920708b-logs\") pod 
\"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318666 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318691 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-combined-ca-bundle\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318717 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbpkb\" (UniqueName: \"kubernetes.io/projected/e38493cb-6fde-4245-a5a4-99a91920708b-kube-api-access-nbpkb\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318752 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-combined-ca-bundle\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318804 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-public-tls-certs\") pod \"placement-96958f474-956sq\" (UID: 
\"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318824 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data-custom\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318875 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f80e757-efd8-4d5f-a2bf-46c03b169956-logs\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.318896 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-scripts\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.326103 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e38493cb-6fde-4245-a5a4-99a91920708b-logs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.326559 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f80e757-efd8-4d5f-a2bf-46c03b169956-logs\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.336076 
4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-scripts\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.350890 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-config-data\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.373220 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-internal-tls-certs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.376269 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5495\" (UniqueName: \"kubernetes.io/projected/2f80e757-efd8-4d5f-a2bf-46c03b169956-kube-api-access-w5495\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.377321 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbpkb\" (UniqueName: \"kubernetes.io/projected/e38493cb-6fde-4245-a5a4-99a91920708b-kube-api-access-nbpkb\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.377339 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data-custom\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.377452 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-combined-ca-bundle\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.383642 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.384358 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e38493cb-6fde-4245-a5a4-99a91920708b-public-tls-certs\") pod \"placement-96958f474-956sq\" (UID: \"e38493cb-6fde-4245-a5a4-99a91920708b\") " pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.384571 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-combined-ca-bundle\") pod \"barbican-api-64b687bd7d-jhpmr\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") " pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.407436 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.410151 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.431478 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-96958f474-956sq" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.461754 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.503427 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-657b46f66d-5snf5"] Feb 23 09:09:40 crc kubenswrapper[4940]: I0223 09:09:40.814176 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c6749c74d-ng8p9"] Feb 23 09:09:40 crc kubenswrapper[4940]: W0223 09:09:40.823763 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf433de8f_71fb_4f02_a223_871cc2959145.slice/crio-49eaa9426bb097e691ef23490366aa36cbbb95591a71cc92ebcb85c243fba664 WatchSource:0}: Error finding container 49eaa9426bb097e691ef23490366aa36cbbb95591a71cc92ebcb85c243fba664: Status 404 returned error can't find the container with id 49eaa9426bb097e691ef23490366aa36cbbb95591a71cc92ebcb85c243fba664 Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.070352 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5fff59f9db-l27lk"] Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.271563 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-677b768799-7xn5v"] Feb 23 09:09:41 crc kubenswrapper[4940]: W0223 09:09:41.300973 4940 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b394c0_8e56_4cfd_b85b_27109abc1b52.slice/crio-0acbd31a37704ab8d49ce47a1a3cb1359e3efaa8d9c22b64204913e8e3395b76 WatchSource:0}: Error finding container 0acbd31a37704ab8d49ce47a1a3cb1359e3efaa8d9c22b64204913e8e3395b76: Status 404 returned error can't find the container with id 0acbd31a37704ab8d49ce47a1a3cb1359e3efaa8d9c22b64204913e8e3395b76 Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.386186 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" event={"ID":"5488f264-b445-4d42-9d11-9b74f2c9b1f5","Type":"ContainerStarted","Data":"570b4e62edca17379ddd80c94d983ac2a92bc5fbd38bc1adc3ec6d1f662894d2"} Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.386226 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ghmbn"] Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.396982 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-657b46f66d-5snf5" event={"ID":"14b5e353-0333-4351-a628-4767407854ec","Type":"ContainerStarted","Data":"36cfce92efca27919d183fc25321b5a81b3abd2593a2a0700d12f3440de638ed"} Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.397042 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-657b46f66d-5snf5" event={"ID":"14b5e353-0333-4351-a628-4767407854ec","Type":"ContainerStarted","Data":"9ae2a6d9e3de6e128e454aa75c11088686e147638f58cc6dacf21a925535c508"} Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.397794 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.403831 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677b768799-7xn5v" 
event={"ID":"02b394c0-8e56-4cfd-b85b-27109abc1b52","Type":"ContainerStarted","Data":"0acbd31a37704ab8d49ce47a1a3cb1359e3efaa8d9c22b64204913e8e3395b76"} Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.416222 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-657b46f66d-5snf5" podStartSLOduration=2.416211226 podStartE2EDuration="2.416211226s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:41.414378379 +0000 UTC m=+1312.797584536" watchObservedRunningTime="2026-02-23 09:09:41.416211226 +0000 UTC m=+1312.799417383" Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.434591 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6p69q" event={"ID":"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5","Type":"ContainerStarted","Data":"d259250918978cd7c3ae3722a903f4629c7b64cbf15a3fbf81f3518820b864db"} Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.447735 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6749c74d-ng8p9" event={"ID":"f433de8f-71fb-4f02-a223-871cc2959145","Type":"ContainerStarted","Data":"49eaa9426bb097e691ef23490366aa36cbbb95591a71cc92ebcb85c243fba664"} Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.473442 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6p69q" podStartSLOduration=3.938563252 podStartE2EDuration="48.473424434s" podCreationTimestamp="2026-02-23 09:08:53 +0000 UTC" firstStartedPulling="2026-02-23 09:08:54.357294504 +0000 UTC m=+1265.740500661" lastFinishedPulling="2026-02-23 09:09:38.892155686 +0000 UTC m=+1310.275361843" observedRunningTime="2026-02-23 09:09:41.472394601 +0000 UTC m=+1312.855600758" watchObservedRunningTime="2026-02-23 09:09:41.473424434 +0000 UTC m=+1312.856630591" Feb 23 09:09:41 crc kubenswrapper[4940]: 
I0223 09:09:41.790826 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b9b88c6bc-hkv9v"] Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.833964 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64b687bd7d-jhpmr"] Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.868147 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm"] Feb 23 09:09:41 crc kubenswrapper[4940]: I0223 09:09:41.887245 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-96958f474-956sq"] Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.473853 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" event={"ID":"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce","Type":"ContainerStarted","Data":"46b3cdfcb17f3f83dfd847a28d0c36e0125778f1800438b5611c4b111371bd78"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.480408 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6749c74d-ng8p9" event={"ID":"f433de8f-71fb-4f02-a223-871cc2959145","Type":"ContainerStarted","Data":"7a06c9772d7b7b2824187fdbe6566e63cba4d50fa670b25290e09e202ad9a4db"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.480462 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6749c74d-ng8p9" event={"ID":"f433de8f-71fb-4f02-a223-871cc2959145","Type":"ContainerStarted","Data":"74f58278e9eaf1ba3aaa6c4c89dd754902de0e73978ad252afed9398bcb8f2e6"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.480918 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.480951 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 
09:09:42.483301 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96958f474-956sq" event={"ID":"e38493cb-6fde-4245-a5a4-99a91920708b","Type":"ContainerStarted","Data":"10af2d1ea10b284f4c0ace7b4879b24b6f69151cf45d5fd895d7e3f9c4e88a1d"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.483354 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96958f474-956sq" event={"ID":"e38493cb-6fde-4245-a5a4-99a91920708b","Type":"ContainerStarted","Data":"dce42171589b190de320377a22f1e6269678a8561b19b1e0ee740edd766240e7"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.485918 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" event={"ID":"8008f8dc-0709-408f-88d1-0707f66c0a10","Type":"ContainerStarted","Data":"6a8b097869029b94a90934e5c9eb1969a6dce9758433b3f5f0e888bb936733bf"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.519914 4940 generic.go:334] "Generic (PLEG): container finished" podID="b39bf698-8f6d-4434-a926-239c936bbdca" containerID="596291a8e998dffd49a123d45c88e3c1ffab983d33f61afe467c6d13867dbe9b" exitCode=0 Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.520003 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" event={"ID":"b39bf698-8f6d-4434-a926-239c936bbdca","Type":"ContainerDied","Data":"596291a8e998dffd49a123d45c88e3c1ffab983d33f61afe467c6d13867dbe9b"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.520033 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" event={"ID":"b39bf698-8f6d-4434-a926-239c936bbdca","Type":"ContainerStarted","Data":"afca7315b672da42c17cfa6c3463952df3efc5ea90dce4c6d570d8228ab1518e"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.551096 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64b687bd7d-jhpmr" 
event={"ID":"2f80e757-efd8-4d5f-a2bf-46c03b169956","Type":"ContainerStarted","Data":"91408324e2a67395fe5476ad2dc90bf08a1986c8a99b0d971404b15fcd6427e3"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.551431 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.551446 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64b687bd7d-jhpmr" event={"ID":"2f80e757-efd8-4d5f-a2bf-46c03b169956","Type":"ContainerStarted","Data":"44e17f3d5963f3c652e966b59a18c53aa8cfdc71dc72079e653c8c9e8206a72b"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.551458 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64b687bd7d-jhpmr" event={"ID":"2f80e757-efd8-4d5f-a2bf-46c03b169956","Type":"ContainerStarted","Data":"6208bf17e39601a8aab0fe3fe464b39191a36eed24d0d5b693c197aeeb819382"} Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.551470 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.577470 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c6749c74d-ng8p9" podStartSLOduration=3.577444846 podStartE2EDuration="3.577444846s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:42.540125113 +0000 UTC m=+1313.923331270" watchObservedRunningTime="2026-02-23 09:09:42.577444846 +0000 UTC m=+1313.960651003" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.580802 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.595449 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8485464bb-cvmj5" podUID="0c698dee-e3c4-44d3-a08b-73e6b1e87986" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.659766 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64b687bd7d-jhpmr" podStartSLOduration=3.659748371 podStartE2EDuration="3.659748371s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:42.632182215 +0000 UTC m=+1314.015388382" watchObservedRunningTime="2026-02-23 09:09:42.659748371 +0000 UTC m=+1314.042954518" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.988152 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8f67f879d-fb7mr"] Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.990242 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.994361 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 23 09:09:42 crc kubenswrapper[4940]: I0223 09:09:42.994867 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.025288 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8f67f879d-fb7mr"] Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.147990 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-combined-ca-bundle\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.148074 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-config-data-custom\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.148124 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flf6z\" (UniqueName: \"kubernetes.io/projected/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-kube-api-access-flf6z\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.148364 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-config-data\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.148510 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-public-tls-certs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.148546 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-logs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.149678 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-internal-tls-certs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.251658 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flf6z\" (UniqueName: \"kubernetes.io/projected/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-kube-api-access-flf6z\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.251762 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-config-data\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.251822 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-public-tls-certs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.251850 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-logs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.251898 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-internal-tls-certs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.251994 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-combined-ca-bundle\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.252036 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-config-data-custom\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.253322 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-logs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.258944 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-config-data\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.259734 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-config-data-custom\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.260354 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-combined-ca-bundle\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.262368 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-internal-tls-certs\") pod 
\"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.264409 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-public-tls-certs\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.274473 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flf6z\" (UniqueName: \"kubernetes.io/projected/e15eadde-81b6-46a2-bc90-7f8ded67b3bd-kube-api-access-flf6z\") pod \"barbican-api-8f67f879d-fb7mr\" (UID: \"e15eadde-81b6-46a2-bc90-7f8ded67b3bd\") " pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.337862 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.562473 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-96958f474-956sq" event={"ID":"e38493cb-6fde-4245-a5a4-99a91920708b","Type":"ContainerStarted","Data":"eb5264e49e0b64c9a80a99a6a8ec234b40e14900242f5a752f1ae1ea69c8334e"} Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.562596 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-96958f474-956sq" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.562644 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-96958f474-956sq" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.574707 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" event={"ID":"b39bf698-8f6d-4434-a926-239c936bbdca","Type":"ContainerStarted","Data":"da71d54e51d93b687f2d84195190423a48acc1e3c64d191b6bead08f5304d0ec"} Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.608979 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-96958f474-956sq" podStartSLOduration=4.60895954 podStartE2EDuration="4.60895954s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:43.58793366 +0000 UTC m=+1314.971139827" watchObservedRunningTime="2026-02-23 09:09:43.60895954 +0000 UTC m=+1314.992165687" Feb 23 09:09:43 crc kubenswrapper[4940]: I0223 09:09:43.620889 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" podStartSLOduration=4.620871504 podStartE2EDuration="4.620871504s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:43.607149423 +0000 UTC m=+1314.990355580" watchObservedRunningTime="2026-02-23 09:09:43.620871504 +0000 UTC m=+1315.004077661" Feb 23 09:09:44 crc kubenswrapper[4940]: I0223 09:09:44.583590 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:44 crc kubenswrapper[4940]: I0223 09:09:44.879525 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8f67f879d-fb7mr"] Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.596959 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" event={"ID":"8008f8dc-0709-408f-88d1-0707f66c0a10","Type":"ContainerStarted","Data":"4d2456607c1b5e72ddef31a192e695f56154a65ac1c1b64f50332ae65216b10f"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.597298 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" event={"ID":"8008f8dc-0709-408f-88d1-0707f66c0a10","Type":"ContainerStarted","Data":"b957a640a260c40a8e1468cc59206d9095dc2a2ef9d4bedff5a14bf6a2e1f0d0"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.635288 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f67f879d-fb7mr" event={"ID":"e15eadde-81b6-46a2-bc90-7f8ded67b3bd","Type":"ContainerStarted","Data":"c9159c507fcf20ea036e1dd5a9066dc76b1d5099c1d47993f84c2b82637a70b3"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.635345 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f67f879d-fb7mr" event={"ID":"e15eadde-81b6-46a2-bc90-7f8ded67b3bd","Type":"ContainerStarted","Data":"2f7db0845abbb4e761835327cc94d88798a97bde447ec93f797bc5d16041249d"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.635357 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8f67f879d-fb7mr" 
event={"ID":"e15eadde-81b6-46a2-bc90-7f8ded67b3bd","Type":"ContainerStarted","Data":"14ebe7727b3526e93a5c7e6b16d4727d5fb9a0b0e7ddecb43497e1af31b19521"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.635637 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.635695 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.638466 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" event={"ID":"5488f264-b445-4d42-9d11-9b74f2c9b1f5","Type":"ContainerStarted","Data":"4f3b58b752dabd5615dae5111ffdad266cc6114a01f91aa7f765c18078d93c74"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.638508 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" event={"ID":"5488f264-b445-4d42-9d11-9b74f2c9b1f5","Type":"ContainerStarted","Data":"aa5a595ea9e2f328c035647f7d67a32bd75240eb0390da36a5b2f82d3529298a"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.640943 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677b768799-7xn5v" event={"ID":"02b394c0-8e56-4cfd-b85b-27109abc1b52","Type":"ContainerStarted","Data":"a2e2115fe7f1e78a9427ea2bd13a97e5519c990c52b97da68af30b471b4eeb41"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.640995 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677b768799-7xn5v" event={"ID":"02b394c0-8e56-4cfd-b85b-27109abc1b52","Type":"ContainerStarted","Data":"2ab8e327aeffc16d0654359873e48593f22470c77079aa12754b4371d3e3b369"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.647885 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" 
event={"ID":"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce","Type":"ContainerStarted","Data":"cc406920c5b4bf671b1a888e6cb212aef451a5b8494d2e938e42d7d39a370479"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.647945 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" event={"ID":"8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce","Type":"ContainerStarted","Data":"b776424c2dfc7fd2c13c12fa3ac5da1e1baa574191814badffb82ebf5e4c240a"} Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.719110 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7b9b88c6bc-hkv9v" podStartSLOduration=4.099470144 podStartE2EDuration="6.719090297s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="2026-02-23 09:09:41.818296918 +0000 UTC m=+1313.201503065" lastFinishedPulling="2026-02-23 09:09:44.437917061 +0000 UTC m=+1315.821123218" observedRunningTime="2026-02-23 09:09:45.650519883 +0000 UTC m=+1317.033726040" watchObservedRunningTime="2026-02-23 09:09:45.719090297 +0000 UTC m=+1317.102296454" Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.772387 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8f67f879d-fb7mr" podStartSLOduration=3.772367211 podStartE2EDuration="3.772367211s" podCreationTimestamp="2026-02-23 09:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:09:45.726330925 +0000 UTC m=+1317.109537082" watchObservedRunningTime="2026-02-23 09:09:45.772367211 +0000 UTC m=+1317.155573368" Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.782205 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-677b768799-7xn5v"] Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.791003 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-677b768799-7xn5v" podStartSLOduration=3.666146603 podStartE2EDuration="6.790987066s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="2026-02-23 09:09:41.314595315 +0000 UTC m=+1312.697801472" lastFinishedPulling="2026-02-23 09:09:44.439435788 +0000 UTC m=+1315.822641935" observedRunningTime="2026-02-23 09:09:45.778153082 +0000 UTC m=+1317.161359239" watchObservedRunningTime="2026-02-23 09:09:45.790987066 +0000 UTC m=+1317.174193223" Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.871083 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d6cbfd9cd-f6hzm" podStartSLOduration=4.286576542 podStartE2EDuration="6.871063331s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="2026-02-23 09:09:41.847730722 +0000 UTC m=+1313.230936879" lastFinishedPulling="2026-02-23 09:09:44.432217511 +0000 UTC m=+1315.815423668" observedRunningTime="2026-02-23 09:09:45.815378662 +0000 UTC m=+1317.198584829" watchObservedRunningTime="2026-02-23 09:09:45.871063331 +0000 UTC m=+1317.254269488" Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.918672 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5fff59f9db-l27lk"] Feb 23 09:09:45 crc kubenswrapper[4940]: I0223 09:09:45.919963 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" podStartSLOduration=3.574691629 podStartE2EDuration="6.919943217s" podCreationTimestamp="2026-02-23 09:09:39 +0000 UTC" firstStartedPulling="2026-02-23 09:09:41.092941731 +0000 UTC m=+1312.476147888" lastFinishedPulling="2026-02-23 09:09:44.438193329 +0000 UTC m=+1315.821399476" observedRunningTime="2026-02-23 09:09:45.884127251 +0000 UTC m=+1317.267333408" watchObservedRunningTime="2026-02-23 09:09:45.919943217 +0000 UTC m=+1317.303149374" Feb 23 09:09:47 crc kubenswrapper[4940]: 
I0223 09:09:47.665525 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-677b768799-7xn5v" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker-log" containerID="cri-o://2ab8e327aeffc16d0654359873e48593f22470c77079aa12754b4371d3e3b369" gracePeriod=30 Feb 23 09:09:47 crc kubenswrapper[4940]: I0223 09:09:47.665964 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-677b768799-7xn5v" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker" containerID="cri-o://a2e2115fe7f1e78a9427ea2bd13a97e5519c990c52b97da68af30b471b4eeb41" gracePeriod=30 Feb 23 09:09:47 crc kubenswrapper[4940]: I0223 09:09:47.665952 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener-log" containerID="cri-o://aa5a595ea9e2f328c035647f7d67a32bd75240eb0390da36a5b2f82d3529298a" gracePeriod=30 Feb 23 09:09:47 crc kubenswrapper[4940]: I0223 09:09:47.666258 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener" containerID="cri-o://4f3b58b752dabd5615dae5111ffdad266cc6114a01f91aa7f765c18078d93c74" gracePeriod=30 Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.678489 4940 generic.go:334] "Generic (PLEG): container finished" podID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerID="4f3b58b752dabd5615dae5111ffdad266cc6114a01f91aa7f765c18078d93c74" exitCode=0 Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.679075 4940 generic.go:334] "Generic (PLEG): container finished" podID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerID="aa5a595ea9e2f328c035647f7d67a32bd75240eb0390da36a5b2f82d3529298a" exitCode=143 Feb 23 
09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.678890 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" event={"ID":"5488f264-b445-4d42-9d11-9b74f2c9b1f5","Type":"ContainerDied","Data":"4f3b58b752dabd5615dae5111ffdad266cc6114a01f91aa7f765c18078d93c74"} Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.679149 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" event={"ID":"5488f264-b445-4d42-9d11-9b74f2c9b1f5","Type":"ContainerDied","Data":"aa5a595ea9e2f328c035647f7d67a32bd75240eb0390da36a5b2f82d3529298a"} Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.681940 4940 generic.go:334] "Generic (PLEG): container finished" podID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerID="a2e2115fe7f1e78a9427ea2bd13a97e5519c990c52b97da68af30b471b4eeb41" exitCode=0 Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.681976 4940 generic.go:334] "Generic (PLEG): container finished" podID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerID="2ab8e327aeffc16d0654359873e48593f22470c77079aa12754b4371d3e3b369" exitCode=143 Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.682009 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677b768799-7xn5v" event={"ID":"02b394c0-8e56-4cfd-b85b-27109abc1b52","Type":"ContainerDied","Data":"a2e2115fe7f1e78a9427ea2bd13a97e5519c990c52b97da68af30b471b4eeb41"} Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.682037 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677b768799-7xn5v" event={"ID":"02b394c0-8e56-4cfd-b85b-27109abc1b52","Type":"ContainerDied","Data":"2ab8e327aeffc16d0654359873e48593f22470c77079aa12754b4371d3e3b369"} Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.684013 4940 generic.go:334] "Generic (PLEG): container finished" podID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" 
containerID="d259250918978cd7c3ae3722a903f4629c7b64cbf15a3fbf81f3518820b864db" exitCode=0 Feb 23 09:09:48 crc kubenswrapper[4940]: I0223 09:09:48.684042 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6p69q" event={"ID":"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5","Type":"ContainerDied","Data":"d259250918978cd7c3ae3722a903f4629c7b64cbf15a3fbf81f3518820b864db"} Feb 23 09:09:50 crc kubenswrapper[4940]: I0223 09:09:50.230776 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:09:50 crc kubenswrapper[4940]: I0223 09:09:50.312763 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-q85q9"] Feb 23 09:09:50 crc kubenswrapper[4940]: I0223 09:09:50.313000 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="dnsmasq-dns" containerID="cri-o://6f5d1af74217454355635a221cfb2b337188566b3f28a25eaf9bc292127d925f" gracePeriod=10 Feb 23 09:09:50 crc kubenswrapper[4940]: I0223 09:09:50.712580 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: connect: connection refused" Feb 23 09:09:50 crc kubenswrapper[4940]: I0223 09:09:50.731360 4940 generic.go:334] "Generic (PLEG): container finished" podID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerID="6f5d1af74217454355635a221cfb2b337188566b3f28a25eaf9bc292127d925f" exitCode=0 Feb 23 09:09:50 crc kubenswrapper[4940]: I0223 09:09:50.731417 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" event={"ID":"a6c275c2-a5e4-4761-b103-641ef82153f5","Type":"ContainerDied","Data":"6f5d1af74217454355635a221cfb2b337188566b3f28a25eaf9bc292127d925f"} Feb 23 
09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.272099 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.435502 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64b687bd7d-jhpmr" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.583735 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.622394 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8485464bb-cvmj5" podUID="0c698dee-e3c4-44d3-a08b-73e6b1e87986" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.640736 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6p69q" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.740459 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-db-sync-config-data\") pod \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.740534 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-scripts\") pod \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.740654 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-config-data\") pod \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.740739 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-etc-machine-id\") pod \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.740811 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqrcm\" (UniqueName: \"kubernetes.io/projected/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-kube-api-access-hqrcm\") pod \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.740872 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-combined-ca-bundle\") pod \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\" (UID: \"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5\") " Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.742134 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" (UID: "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.748836 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" (UID: "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.749500 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-kube-api-access-hqrcm" (OuterVolumeSpecName: "kube-api-access-hqrcm") pod "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" (UID: "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5"). InnerVolumeSpecName "kube-api-access-hqrcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.768118 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-scripts" (OuterVolumeSpecName: "scripts") pod "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" (UID: "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.840944 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.842679 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.842706 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.842722 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqrcm\" (UniqueName: \"kubernetes.io/projected/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-kube-api-access-hqrcm\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.842734 4940 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.856965 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-677b768799-7xn5v" event={"ID":"02b394c0-8e56-4cfd-b85b-27109abc1b52","Type":"ContainerDied","Data":"0acbd31a37704ab8d49ce47a1a3cb1359e3efaa8d9c22b64204913e8e3395b76"} Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.857025 4940 scope.go:117] "RemoveContainer" containerID="a2e2115fe7f1e78a9427ea2bd13a97e5519c990c52b97da68af30b471b4eeb41" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.862427 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" (UID: "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.866338 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6p69q" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.866526 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6p69q" event={"ID":"ab97aa50-1b14-4a5c-82cd-1be9f025b2b5","Type":"ContainerDied","Data":"0f166c1680cacfb199a057ba9c87258008797ad60c5982554e9cb4c7208aa7fd"} Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.866555 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f166c1680cacfb199a057ba9c87258008797ad60c5982554e9cb4c7208aa7fd" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.906780 4940 scope.go:117] "RemoveContainer" containerID="2ab8e327aeffc16d0654359873e48593f22470c77079aa12754b4371d3e3b369" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.927867 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-config-data" (OuterVolumeSpecName: "config-data") pod "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" (UID: "ab97aa50-1b14-4a5c-82cd-1be9f025b2b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.928154 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.945642 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:52 crc kubenswrapper[4940]: I0223 09:09:52.945674 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:52 crc kubenswrapper[4940]: E0223 09:09:52.993047 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049512 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data-custom\") pod \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049639 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data\") pod \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049689 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4vw\" (UniqueName: \"kubernetes.io/projected/02b394c0-8e56-4cfd-b85b-27109abc1b52-kube-api-access-2d4vw\") pod \"02b394c0-8e56-4cfd-b85b-27109abc1b52\" (UID: 
\"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049749 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-combined-ca-bundle\") pod \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049834 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phh4\" (UniqueName: \"kubernetes.io/projected/5488f264-b445-4d42-9d11-9b74f2c9b1f5-kube-api-access-7phh4\") pod \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049870 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data-custom\") pod \"02b394c0-8e56-4cfd-b85b-27109abc1b52\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049902 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b394c0-8e56-4cfd-b85b-27109abc1b52-logs\") pod \"02b394c0-8e56-4cfd-b85b-27109abc1b52\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.049944 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data\") pod \"02b394c0-8e56-4cfd-b85b-27109abc1b52\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.050040 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-combined-ca-bundle\") pod \"02b394c0-8e56-4cfd-b85b-27109abc1b52\" (UID: \"02b394c0-8e56-4cfd-b85b-27109abc1b52\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.050073 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5488f264-b445-4d42-9d11-9b74f2c9b1f5-logs\") pod \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\" (UID: \"5488f264-b445-4d42-9d11-9b74f2c9b1f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.051719 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b394c0-8e56-4cfd-b85b-27109abc1b52-logs" (OuterVolumeSpecName: "logs") pod "02b394c0-8e56-4cfd-b85b-27109abc1b52" (UID: "02b394c0-8e56-4cfd-b85b-27109abc1b52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.074640 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5488f264-b445-4d42-9d11-9b74f2c9b1f5-kube-api-access-7phh4" (OuterVolumeSpecName: "kube-api-access-7phh4") pod "5488f264-b445-4d42-9d11-9b74f2c9b1f5" (UID: "5488f264-b445-4d42-9d11-9b74f2c9b1f5"). InnerVolumeSpecName "kube-api-access-7phh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.075971 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5488f264-b445-4d42-9d11-9b74f2c9b1f5-logs" (OuterVolumeSpecName: "logs") pod "5488f264-b445-4d42-9d11-9b74f2c9b1f5" (UID: "5488f264-b445-4d42-9d11-9b74f2c9b1f5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.082522 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "02b394c0-8e56-4cfd-b85b-27109abc1b52" (UID: "02b394c0-8e56-4cfd-b85b-27109abc1b52"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.093836 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b394c0-8e56-4cfd-b85b-27109abc1b52-kube-api-access-2d4vw" (OuterVolumeSpecName: "kube-api-access-2d4vw") pod "02b394c0-8e56-4cfd-b85b-27109abc1b52" (UID: "02b394c0-8e56-4cfd-b85b-27109abc1b52"). InnerVolumeSpecName "kube-api-access-2d4vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.096765 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5488f264-b445-4d42-9d11-9b74f2c9b1f5" (UID: "5488f264-b445-4d42-9d11-9b74f2c9b1f5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.116784 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5488f264-b445-4d42-9d11-9b74f2c9b1f5" (UID: "5488f264-b445-4d42-9d11-9b74f2c9b1f5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.146859 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02b394c0-8e56-4cfd-b85b-27109abc1b52" (UID: "02b394c0-8e56-4cfd-b85b-27109abc1b52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154252 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02b394c0-8e56-4cfd-b85b-27109abc1b52-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154282 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154292 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5488f264-b445-4d42-9d11-9b74f2c9b1f5-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154300 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154308 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4vw\" (UniqueName: \"kubernetes.io/projected/02b394c0-8e56-4cfd-b85b-27109abc1b52-kube-api-access-2d4vw\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154317 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154327 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phh4\" (UniqueName: \"kubernetes.io/projected/5488f264-b445-4d42-9d11-9b74f2c9b1f5-kube-api-access-7phh4\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.154336 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.201581 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data" (OuterVolumeSpecName: "config-data") pod "5488f264-b445-4d42-9d11-9b74f2c9b1f5" (UID: "5488f264-b445-4d42-9d11-9b74f2c9b1f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.215448 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data" (OuterVolumeSpecName: "config-data") pod "02b394c0-8e56-4cfd-b85b-27109abc1b52" (UID: "02b394c0-8e56-4cfd-b85b-27109abc1b52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.256049 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.256824 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b394c0-8e56-4cfd-b85b-27109abc1b52-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.256850 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5488f264-b445-4d42-9d11-9b74f2c9b1f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.358706 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-sb\") pod \"a6c275c2-a5e4-4761-b103-641ef82153f5\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.358745 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smf2c\" (UniqueName: \"kubernetes.io/projected/a6c275c2-a5e4-4761-b103-641ef82153f5-kube-api-access-smf2c\") pod \"a6c275c2-a5e4-4761-b103-641ef82153f5\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.358865 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-svc\") pod \"a6c275c2-a5e4-4761-b103-641ef82153f5\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.358905 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-swift-storage-0\") pod \"a6c275c2-a5e4-4761-b103-641ef82153f5\" (UID: 
\"a6c275c2-a5e4-4761-b103-641ef82153f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.358967 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-nb\") pod \"a6c275c2-a5e4-4761-b103-641ef82153f5\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.359096 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-config\") pod \"a6c275c2-a5e4-4761-b103-641ef82153f5\" (UID: \"a6c275c2-a5e4-4761-b103-641ef82153f5\") " Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.372297 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c275c2-a5e4-4761-b103-641ef82153f5-kube-api-access-smf2c" (OuterVolumeSpecName: "kube-api-access-smf2c") pod "a6c275c2-a5e4-4761-b103-641ef82153f5" (UID: "a6c275c2-a5e4-4761-b103-641ef82153f5"). InnerVolumeSpecName "kube-api-access-smf2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.432898 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-config" (OuterVolumeSpecName: "config") pod "a6c275c2-a5e4-4761-b103-641ef82153f5" (UID: "a6c275c2-a5e4-4761-b103-641ef82153f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.445349 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6c275c2-a5e4-4761-b103-641ef82153f5" (UID: "a6c275c2-a5e4-4761-b103-641ef82153f5"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.456792 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6c275c2-a5e4-4761-b103-641ef82153f5" (UID: "a6c275c2-a5e4-4761-b103-641ef82153f5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.461411 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.461445 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.461460 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smf2c\" (UniqueName: \"kubernetes.io/projected/a6c275c2-a5e4-4761-b103-641ef82153f5-kube-api-access-smf2c\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.461472 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.466131 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6c275c2-a5e4-4761-b103-641ef82153f5" (UID: "a6c275c2-a5e4-4761-b103-641ef82153f5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.475658 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6c275c2-a5e4-4761-b103-641ef82153f5" (UID: "a6c275c2-a5e4-4761-b103-641ef82153f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.562973 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.563012 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c275c2-a5e4-4761-b103-641ef82153f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.876028 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerStarted","Data":"45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55"} Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.876217 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="ceilometer-notification-agent" containerID="cri-o://f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826" gracePeriod=30 Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.876440 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.876696 4940 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="proxy-httpd" containerID="cri-o://45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55" gracePeriod=30 Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.876759 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="sg-core" containerID="cri-o://1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3" gracePeriod=30 Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.883568 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.883644 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5fff59f9db-l27lk" event={"ID":"5488f264-b445-4d42-9d11-9b74f2c9b1f5","Type":"ContainerDied","Data":"570b4e62edca17379ddd80c94d983ac2a92bc5fbd38bc1adc3ec6d1f662894d2"} Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.883702 4940 scope.go:117] "RemoveContainer" containerID="4f3b58b752dabd5615dae5111ffdad266cc6114a01f91aa7f765c18078d93c74" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.889181 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" event={"ID":"a6c275c2-a5e4-4761-b103-641ef82153f5","Type":"ContainerDied","Data":"e52090449693cb10dc59cd27a5620e2e079afc1f9f1a510f86a6815cb4cf9679"} Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.889272 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-q85q9" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.901728 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-677b768799-7xn5v" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.944781 4940 scope.go:117] "RemoveContainer" containerID="aa5a595ea9e2f328c035647f7d67a32bd75240eb0390da36a5b2f82d3529298a" Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.945187 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-q85q9"] Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.967962 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-q85q9"] Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.980291 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-677b768799-7xn5v"] Feb 23 09:09:53 crc kubenswrapper[4940]: I0223 09:09:53.989099 4940 scope.go:117] "RemoveContainer" containerID="6f5d1af74217454355635a221cfb2b337188566b3f28a25eaf9bc292127d925f" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.001675 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-677b768799-7xn5v"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.017242 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5fff59f9db-l27lk"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.037173 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5fff59f9db-l27lk"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.118000 4940 scope.go:117] "RemoveContainer" containerID="82b2579512ad25d8fe0460debe2a2e4350256279749682da587366dc659d6256" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.131642 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132370 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" 
containerName="barbican-keystone-listener-log" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132382 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener-log" Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132400 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="dnsmasq-dns" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132409 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="dnsmasq-dns" Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132427 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker-log" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132434 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker-log" Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132450 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" containerName="cinder-db-sync" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132456 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" containerName="cinder-db-sync" Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132467 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132473 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker" Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132484 4940 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="init" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132490 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="init" Feb 23 09:09:54 crc kubenswrapper[4940]: E0223 09:09:54.132519 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.132525 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.133025 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.133041 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" containerName="cinder-db-sync" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.133061 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" containerName="barbican-keystone-listener-log" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.133081 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker-log" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.133101 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" containerName="barbican-worker" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.133112 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" containerName="dnsmasq-dns" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.135338 4940 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.143583 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f965d" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.156869 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.173381 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.173671 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.184346 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z84q\" (UniqueName: \"kubernetes.io/projected/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-kube-api-access-4z84q\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.184508 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.184691 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.199711 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.199794 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.199856 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.247750 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.269875 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.278559 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.288401 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301344 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301429 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301466 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301494 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301520 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301567 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z84q\" (UniqueName: \"kubernetes.io/projected/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-kube-api-access-4z84q\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.301942 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.310596 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.314603 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.323546 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.334226 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z84q\" (UniqueName: \"kubernetes.io/projected/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-kube-api-access-4z84q\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.336141 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.350817 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.351228 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.356480 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.367548 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.401486 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fzrcd"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.402995 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403692 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7dq\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-kube-api-access-nq7dq\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403758 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403788 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403804 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhcq\" (UniqueName: 
\"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-kube-api-access-5zhcq\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403818 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-dev\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403837 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403858 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403875 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403891 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403908 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403931 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403948 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403968 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-run\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.403988 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-run\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") 
" pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404008 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404026 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404043 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-sys\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404068 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404083 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 
09:09:54.404109 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404125 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404145 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404185 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404204 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-lib-modules\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404223 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404259 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404272 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404305 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404329 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404344 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-scripts\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404374 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.404407 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-ceph\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.419632 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.433054 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fzrcd"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.486695 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.488738 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.493255 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.501256 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.504392 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.505929 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhnh\" (UniqueName: \"kubernetes.io/projected/f94f1199-3d0b-4502-9013-a1e408c7280e-kube-api-access-mhhnh\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.505961 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.505982 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-sys\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506005 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " 
pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506022 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506039 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506052 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506069 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-config\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506089 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506108 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506126 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506149 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-lib-modules\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506166 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506191 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506206 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506226 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506247 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506272 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506286 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-scripts\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506303 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-cinder\") pod \"cinder-backup-0\" (UID: 
\"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506322 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-ceph\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506341 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq7dq\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-kube-api-access-nq7dq\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506372 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506390 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506418 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 
09:09:54.506434 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhcq\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-kube-api-access-5zhcq\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506448 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-dev\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506465 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506484 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506505 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506520 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506560 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506582 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506603 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506644 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506673 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-run\") pod \"cinder-backup-0\" (UID: 
\"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506707 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-run\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506734 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506840 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506893 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-dev\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506924 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-sys\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506953 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.506982 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.507010 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-sys\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.507246 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.507292 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.516244 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-lib-modules\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" 
Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.517548 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.517720 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.520156 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.520225 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-dev\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.521920 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.522896 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.522941 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.522978 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.523003 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-run\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.523035 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-run\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.523789 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 
crc kubenswrapper[4940]: I0223 09:09:54.529330 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.539120 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.560795 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.561212 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-ceph\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.561368 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.561635 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-scripts\") pod 
\"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.573908 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhcq\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-kube-api-access-5zhcq\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.574228 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.575312 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.575956 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.576557 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.583190 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq7dq\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-kube-api-access-nq7dq\") pod \"cinder-backup-0\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608008 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608056 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608105 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608126 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608173 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-scripts\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608191 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-logs\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608212 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdnp7\" (UniqueName: \"kubernetes.io/projected/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-kube-api-access-pdnp7\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608232 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608262 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608301 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: 
\"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608328 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608353 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhhnh\" (UniqueName: \"kubernetes.io/projected/f94f1199-3d0b-4502-9013-a1e408c7280e-kube-api-access-mhhnh\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.608394 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-config\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.609892 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-config\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.611125 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.611654 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.612308 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.612981 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.633392 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhhnh\" (UniqueName: \"kubernetes.io/projected/f94f1199-3d0b-4502-9013-a1e408c7280e-kube-api-access-mhhnh\") pod \"dnsmasq-dns-5c9776ccc5-fzrcd\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.710719 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.711102 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.712338 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.712522 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-scripts\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.712544 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-logs\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.712589 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdnp7\" (UniqueName: \"kubernetes.io/projected/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-kube-api-access-pdnp7\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.712684 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data\") pod \"cinder-api-0\" (UID: 
\"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.716816 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.716892 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.718247 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.725074 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.725495 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-logs\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.726073 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-scripts\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.757793 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdnp7\" (UniqueName: \"kubernetes.io/projected/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-kube-api-access-pdnp7\") pod \"cinder-api-0\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " pod="openstack/cinder-api-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.771445 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.779777 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.801087 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:54 crc kubenswrapper[4940]: I0223 09:09:54.913352 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.009513 4940 generic.go:334] "Generic (PLEG): container finished" podID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerID="1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3" exitCode=2 Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.009745 4940 generic.go:334] "Generic (PLEG): container finished" podID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerID="f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826" exitCode=0 Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.009763 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerDied","Data":"1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3"} Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.009788 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerDied","Data":"f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826"} Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.115751 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.393294 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b394c0-8e56-4cfd-b85b-27109abc1b52" path="/var/lib/kubelet/pods/02b394c0-8e56-4cfd-b85b-27109abc1b52/volumes" Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.394263 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5488f264-b445-4d42-9d11-9b74f2c9b1f5" path="/var/lib/kubelet/pods/5488f264-b445-4d42-9d11-9b74f2c9b1f5/volumes" Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.394830 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c275c2-a5e4-4761-b103-641ef82153f5" 
path="/var/lib/kubelet/pods/a6c275c2-a5e4-4761-b103-641ef82153f5/volumes" Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.635671 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:09:55 crc kubenswrapper[4940]: W0223 09:09:55.677414 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0911a5da_3249_4858_8246_5334db240255.slice/crio-f819f014d4e2ddd6f0104c3634bbb56bb28b87dbd06d18188332947a2453706b WatchSource:0}: Error finding container f819f014d4e2ddd6f0104c3634bbb56bb28b87dbd06d18188332947a2453706b: Status 404 returned error can't find the container with id f819f014d4e2ddd6f0104c3634bbb56bb28b87dbd06d18188332947a2453706b Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.755490 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fzrcd"] Feb 23 09:09:55 crc kubenswrapper[4940]: W0223 09:09:55.782255 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf94f1199_3d0b_4502_9013_a1e408c7280e.slice/crio-dbbe5efec5b413474aac62d752f5790ebad57d9ebdea6ce825db27674de2ca69 WatchSource:0}: Error finding container dbbe5efec5b413474aac62d752f5790ebad57d9ebdea6ce825db27674de2ca69: Status 404 returned error can't find the container with id dbbe5efec5b413474aac62d752f5790ebad57d9ebdea6ce825db27674de2ca69 Feb 23 09:09:55 crc kubenswrapper[4940]: I0223 09:09:55.811935 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.026294 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" event={"ID":"f94f1199-3d0b-4502-9013-a1e408c7280e","Type":"ContainerStarted","Data":"dbbe5efec5b413474aac62d752f5790ebad57d9ebdea6ce825db27674de2ca69"} Feb 23 09:09:56 crc 
kubenswrapper[4940]: I0223 09:09:56.027593 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5a733b1-ef74-46e6-804f-c0ad92ef38aa","Type":"ContainerStarted","Data":"81da9152bc042f95853473ae615973c62f415f4fefb401ce0805b43028302201"} Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.027750 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.028856 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0911a5da-3249-4858-8246-5334db240255","Type":"ContainerStarted","Data":"f819f014d4e2ddd6f0104c3634bbb56bb28b87dbd06d18188332947a2453706b"} Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.052089 4940 generic.go:334] "Generic (PLEG): container finished" podID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerID="45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55" exitCode=0 Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.052120 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerDied","Data":"45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55"} Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.052144 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6af3cab0-e7d8-461f-9092-6b5afefff5cc","Type":"ContainerDied","Data":"b7996fbac951655972a87c5f5d69c66e2e11b6f66042deee94fbe46e6c2a8141"} Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.052161 4940 scope.go:117] "RemoveContainer" containerID="45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.052259 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.104849 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.111797 4940 scope.go:117] "RemoveContainer" containerID="1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.160597 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lchs\" (UniqueName: \"kubernetes.io/projected/6af3cab0-e7d8-461f-9092-6b5afefff5cc-kube-api-access-9lchs\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.160760 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-config-data\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.160871 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-sg-core-conf-yaml\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.160911 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-run-httpd\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.160970 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-log-httpd\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.161713 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-scripts\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.161779 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-combined-ca-bundle\") pod \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\" (UID: \"6af3cab0-e7d8-461f-9092-6b5afefff5cc\") " Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.164249 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.174162 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.194813 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-scripts" (OuterVolumeSpecName: "scripts") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.248977 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ffc6bfc65-qhp9j"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.249216 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ffc6bfc65-qhp9j" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-api" containerID="cri-o://74b3089cf0f921509af84286bb5056e18b7337f0e6fb28a9d0aa489627692b95" gracePeriod=30 Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.249876 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ffc6bfc65-qhp9j" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-httpd" containerID="cri-o://e3c8d0b0648ec94d2501f62779ab5e1aa6d83808ac2d9c0160962627213f80f6" gracePeriod=30 Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.265149 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af3cab0-e7d8-461f-9092-6b5afefff5cc-kube-api-access-9lchs" (OuterVolumeSpecName: "kube-api-access-9lchs") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "kube-api-access-9lchs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.265652 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.265684 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6af3cab0-e7d8-461f-9092-6b5afefff5cc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.265696 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.265717 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lchs\" (UniqueName: \"kubernetes.io/projected/6af3cab0-e7d8-461f-9092-6b5afefff5cc-kube-api-access-9lchs\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.286357 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-654489f6f-92jdq"] Feb 23 09:09:56 crc kubenswrapper[4940]: E0223 09:09:56.287395 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="proxy-httpd" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.287409 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="proxy-httpd" Feb 23 09:09:56 crc kubenswrapper[4940]: E0223 09:09:56.287427 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="ceilometer-notification-agent" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.287434 4940 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="ceilometer-notification-agent" Feb 23 09:09:56 crc kubenswrapper[4940]: E0223 09:09:56.287478 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="sg-core" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.287486 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="sg-core" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.287652 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="ceilometer-notification-agent" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.287670 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="sg-core" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.287692 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" containerName="proxy-httpd" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.288869 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.293894 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-654489f6f-92jdq"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.317884 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368727 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-config\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368780 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-internal-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368824 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-httpd-config\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368846 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-combined-ca-bundle\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368918 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24qc\" (UniqueName: \"kubernetes.io/projected/feae1958-0b14-4a24-af08-cb96a4131a47-kube-api-access-x24qc\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") 
" pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368950 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-ovndb-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.368989 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-public-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.369048 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.378120 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.392028 4940 scope.go:117] "RemoveContainer" containerID="f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.432752 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-config-data" (OuterVolumeSpecName: "config-data") pod "6af3cab0-e7d8-461f-9092-6b5afefff5cc" (UID: "6af3cab0-e7d8-461f-9092-6b5afefff5cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.473893 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-ovndb-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.473977 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-public-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474021 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-config\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474039 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-internal-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474078 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-httpd-config\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474098 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-combined-ca-bundle\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474182 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x24qc\" (UniqueName: \"kubernetes.io/projected/feae1958-0b14-4a24-af08-cb96a4131a47-kube-api-access-x24qc\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474240 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.474253 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af3cab0-e7d8-461f-9092-6b5afefff5cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.481241 4940 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-ovndb-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.482293 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-public-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.486258 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-httpd-config\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.487119 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-config\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.490276 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-internal-tls-certs\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.490542 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feae1958-0b14-4a24-af08-cb96a4131a47-combined-ca-bundle\") pod 
\"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.494244 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x24qc\" (UniqueName: \"kubernetes.io/projected/feae1958-0b14-4a24-af08-cb96a4131a47-kube-api-access-x24qc\") pod \"neutron-654489f6f-92jdq\" (UID: \"feae1958-0b14-4a24-af08-cb96a4131a47\") " pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.621263 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7ffc6bfc65-qhp9j" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": read tcp 10.217.0.2:57092->10.217.0.158:9696: read: connection reset by peer" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.627793 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.638783 4940 scope.go:117] "RemoveContainer" containerID="45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55" Feb 23 09:09:56 crc kubenswrapper[4940]: E0223 09:09:56.639781 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55\": container with ID starting with 45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55 not found: ID does not exist" containerID="45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.639815 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55"} err="failed to get container status 
\"45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55\": rpc error: code = NotFound desc = could not find container \"45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55\": container with ID starting with 45584dc60dfed0903a05c4236e685cbc1320a25de5d826a48c3751896a04aa55 not found: ID does not exist" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.639840 4940 scope.go:117] "RemoveContainer" containerID="1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3" Feb 23 09:09:56 crc kubenswrapper[4940]: E0223 09:09:56.643233 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3\": container with ID starting with 1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3 not found: ID does not exist" containerID="1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.643290 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3"} err="failed to get container status \"1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3\": rpc error: code = NotFound desc = could not find container \"1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3\": container with ID starting with 1c5004709c21be0889fea3e58d8bd445fa72a121658b331f7949822e7ea39bf3 not found: ID does not exist" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.643329 4940 scope.go:117] "RemoveContainer" containerID="f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826" Feb 23 09:09:56 crc kubenswrapper[4940]: E0223 09:09:56.645813 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826\": container with ID starting with f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826 not found: ID does not exist" containerID="f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.645859 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826"} err="failed to get container status \"f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826\": rpc error: code = NotFound desc = could not find container \"f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826\": container with ID starting with f8a149cbff71bf8a8cf32a4d15c2ded691d1894cc7b8d4cd9488753cfd672826 not found: ID does not exist" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.655583 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.784676 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.795167 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.843801 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.851197 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.853503 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.854632 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.878443 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998035 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998086 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-run-httpd\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998167 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-log-httpd\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998221 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfg6b\" (UniqueName: \"kubernetes.io/projected/5278c504-4391-438f-a6c9-39071eade5ae-kube-api-access-dfg6b\") pod \"ceilometer-0\" (UID: 
\"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998280 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998363 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-config-data\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:56 crc kubenswrapper[4940]: I0223 09:09:56.998461 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-scripts\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.094950 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf","Type":"ContainerStarted","Data":"87d3ffcbe822fd2a9e75ee4a029931c54029d633fce9cb569c64e5cd6b00566b"} Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.100577 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cb80df2-e9ff-48b8-86c1-301afe49d9ed","Type":"ContainerStarted","Data":"fe937bdbbb23a100b9e5091c768a37eb67282357d73be158e34034f7eeb6763f"} Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.101412 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-config-data\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.101496 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-scripts\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.101575 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.101605 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-run-httpd\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.101680 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-log-httpd\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.101738 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfg6b\" (UniqueName: \"kubernetes.io/projected/5278c504-4391-438f-a6c9-39071eade5ae-kube-api-access-dfg6b\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 
09:09:57.101799 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.107045 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-config-data\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.108452 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-run-httpd\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.109593 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.109857 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-log-httpd\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.115190 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-scripts\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " 
pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.126852 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.129865 4940 generic.go:334] "Generic (PLEG): container finished" podID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerID="5bc358fbbd7560a89df27d2309e81e5c53184a1777841e52dd8b88c6f405a4bf" exitCode=0 Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.129970 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" event={"ID":"f94f1199-3d0b-4502-9013-a1e408c7280e","Type":"ContainerDied","Data":"5bc358fbbd7560a89df27d2309e81e5c53184a1777841e52dd8b88c6f405a4bf"} Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.140602 4940 generic.go:334] "Generic (PLEG): container finished" podID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerID="e3c8d0b0648ec94d2501f62779ab5e1aa6d83808ac2d9c0160962627213f80f6" exitCode=0 Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.140661 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffc6bfc65-qhp9j" event={"ID":"b6ace7c6-781c-4053-8e6e-26232d9355da","Type":"ContainerDied","Data":"e3c8d0b0648ec94d2501f62779ab5e1aa6d83808ac2d9c0160962627213f80f6"} Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.143677 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfg6b\" (UniqueName: \"kubernetes.io/projected/5278c504-4391-438f-a6c9-39071eade5ae-kube-api-access-dfg6b\") pod \"ceilometer-0\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.190151 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.194315 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.403522 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af3cab0-e7d8-461f-9092-6b5afefff5cc" path="/var/lib/kubelet/pods/6af3cab0-e7d8-461f-9092-6b5afefff5cc/volumes" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.596514 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-654489f6f-92jdq"] Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.655095 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8f67f879d-fb7mr" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.737938 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64b687bd7d-jhpmr"] Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.738172 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64b687bd7d-jhpmr" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api-log" containerID="cri-o://44e17f3d5963f3c652e966b59a18c53aa8cfdc71dc72079e653c8c9e8206a72b" gracePeriod=30 Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.738337 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64b687bd7d-jhpmr" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api" containerID="cri-o://91408324e2a67395fe5476ad2dc90bf08a1986c8a99b0d971404b15fcd6427e3" gracePeriod=30 Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.766917 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64b687bd7d-jhpmr" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.167:9311/healthcheck\": EOF" Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.831525 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:09:57 crc kubenswrapper[4940]: I0223 09:09:57.855666 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 09:09:57 crc kubenswrapper[4940]: W0223 09:09:57.945262 4940 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/user-runtime-dir@0.service": 0x40000100 == IN_CREATE|IN_ISDIR): open /sys/fs/cgroup/user.slice/user-0.slice/user-runtime-dir@0.service: no such file or directory Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.177220 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-654489f6f-92jdq" event={"ID":"feae1958-0b14-4a24-af08-cb96a4131a47","Type":"ContainerStarted","Data":"0a1b63103ad414c3b7865cdcd811531cfe575e0269429de1e2f773f4f4cb50e3"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.177760 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-654489f6f-92jdq" event={"ID":"feae1958-0b14-4a24-af08-cb96a4131a47","Type":"ContainerStarted","Data":"eb5a2c497236e41e6b9b932e70782250d20dd211a3721e1389b6aacd34ffe689"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.199756 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0911a5da-3249-4858-8246-5334db240255","Type":"ContainerStarted","Data":"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.205004 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cb80df2-e9ff-48b8-86c1-301afe49d9ed","Type":"ContainerStarted","Data":"ef014b2266efe4f98cd310710c01009d7f5c93e4347f3d5b8d1c34fbfa8ef086"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.230862 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" event={"ID":"f94f1199-3d0b-4502-9013-a1e408c7280e","Type":"ContainerStarted","Data":"e1c5f20a98159585833a8912d624d8723128f5cefefb1c490aaeb03ab9b4c6d1"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.232638 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.238560 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5a733b1-ef74-46e6-804f-c0ad92ef38aa","Type":"ContainerStarted","Data":"b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.239666 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerStarted","Data":"35f848f0051c146254ae7754c1b502844c27fff6495965f77f0626b1c2e358fa"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.242662 4940 generic.go:334] "Generic (PLEG): container finished" podID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerID="44e17f3d5963f3c652e966b59a18c53aa8cfdc71dc72079e653c8c9e8206a72b" exitCode=143 Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.243409 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64b687bd7d-jhpmr" event={"ID":"2f80e757-efd8-4d5f-a2bf-46c03b169956","Type":"ContainerDied","Data":"44e17f3d5963f3c652e966b59a18c53aa8cfdc71dc72079e653c8c9e8206a72b"} Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.263467 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" podStartSLOduration=4.263444496 podStartE2EDuration="4.263444496s" podCreationTimestamp="2026-02-23 09:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
09:09:58.257869971 +0000 UTC m=+1329.641076128" watchObservedRunningTime="2026-02-23 09:09:58.263444496 +0000 UTC m=+1329.646650653" Feb 23 09:09:58 crc kubenswrapper[4940]: I0223 09:09:58.595589 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7ffc6bfc65-qhp9j" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Feb 23 09:09:59 crc kubenswrapper[4940]: I0223 09:09:59.253696 4940 generic.go:334] "Generic (PLEG): container finished" podID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" containerID="342f7def9f50941425d743518c4769503c938bad72ec71dd786fa1971cffb42d" exitCode=0 Feb 23 09:09:59 crc kubenswrapper[4940]: I0223 09:09:59.253770 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ktg94" event={"ID":"a43f9f8e-d118-4247-b1f0-b6aac984bb4d","Type":"ContainerDied","Data":"342f7def9f50941425d743518c4769503c938bad72ec71dd786fa1971cffb42d"} Feb 23 09:09:59 crc kubenswrapper[4940]: I0223 09:09:59.260041 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0911a5da-3249-4858-8246-5334db240255","Type":"ContainerStarted","Data":"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8"} Feb 23 09:09:59 crc kubenswrapper[4940]: I0223 09:09:59.306728 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.771385009 podStartE2EDuration="5.306704649s" podCreationTimestamp="2026-02-23 09:09:54 +0000 UTC" firstStartedPulling="2026-02-23 09:09:55.681834767 +0000 UTC m=+1327.065040924" lastFinishedPulling="2026-02-23 09:09:57.217154407 +0000 UTC m=+1328.600360564" observedRunningTime="2026-02-23 09:09:59.301231997 +0000 UTC m=+1330.684438174" watchObservedRunningTime="2026-02-23 09:09:59.306704649 +0000 UTC m=+1330.689910836" Feb 23 09:09:59 crc 
kubenswrapper[4940]: I0223 09:09:59.781298 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 23 09:10:00 crc kubenswrapper[4940]: I0223 09:10:00.985035 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ktg94" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.107555 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-combined-ca-bundle\") pod \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.107669 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-config-data\") pod \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.107809 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-job-config-data\") pod \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.107865 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sthvh\" (UniqueName: \"kubernetes.io/projected/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-kube-api-access-sthvh\") pod \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\" (UID: \"a43f9f8e-d118-4247-b1f0-b6aac984bb4d\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.118763 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-kube-api-access-sthvh" 
(OuterVolumeSpecName: "kube-api-access-sthvh") pod "a43f9f8e-d118-4247-b1f0-b6aac984bb4d" (UID: "a43f9f8e-d118-4247-b1f0-b6aac984bb4d"). InnerVolumeSpecName "kube-api-access-sthvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.120476 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "a43f9f8e-d118-4247-b1f0-b6aac984bb4d" (UID: "a43f9f8e-d118-4247-b1f0-b6aac984bb4d"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.125553 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-config-data" (OuterVolumeSpecName: "config-data") pod "a43f9f8e-d118-4247-b1f0-b6aac984bb4d" (UID: "a43f9f8e-d118-4247-b1f0-b6aac984bb4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.182656 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a43f9f8e-d118-4247-b1f0-b6aac984bb4d" (UID: "a43f9f8e-d118-4247-b1f0-b6aac984bb4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.214980 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.215011 4940 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.215023 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sthvh\" (UniqueName: \"kubernetes.io/projected/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-kube-api-access-sthvh\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.215032 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43f9f8e-d118-4247-b1f0-b6aac984bb4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.304309 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api-log" containerID="cri-o://ef014b2266efe4f98cd310710c01009d7f5c93e4347f3d5b8d1c34fbfa8ef086" gracePeriod=30 Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.304732 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cb80df2-e9ff-48b8-86c1-301afe49d9ed","Type":"ContainerStarted","Data":"d6ba6422de2258f1e311c528e44b43ca195cb07e82d58cb1c86e0becabb0880f"} Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.305770 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 
09:10:01.304792 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api" containerID="cri-o://d6ba6422de2258f1e311c528e44b43ca195cb07e82d58cb1c86e0becabb0880f" gracePeriod=30 Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.311474 4940 generic.go:334] "Generic (PLEG): container finished" podID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerID="74b3089cf0f921509af84286bb5056e18b7337f0e6fb28a9d0aa489627692b95" exitCode=0 Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.311567 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffc6bfc65-qhp9j" event={"ID":"b6ace7c6-781c-4053-8e6e-26232d9355da","Type":"ContainerDied","Data":"74b3089cf0f921509af84286bb5056e18b7337f0e6fb28a9d0aa489627692b95"} Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.327547 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.327524931 podStartE2EDuration="7.327524931s" podCreationTimestamp="2026-02-23 09:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:01.322696469 +0000 UTC m=+1332.705902626" watchObservedRunningTime="2026-02-23 09:10:01.327524931 +0000 UTC m=+1332.710731088" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.340767 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5a733b1-ef74-46e6-804f-c0ad92ef38aa","Type":"ContainerStarted","Data":"17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe"} Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.358881 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerStarted","Data":"f8c9e1f5b64f331ab938034be459c623bf79569ea26e4ea027e3ccb0e17c22fa"} Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.366301 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-654489f6f-92jdq" event={"ID":"feae1958-0b14-4a24-af08-cb96a4131a47","Type":"ContainerStarted","Data":"db44615f322564843fa044620feb17cfbc8cd13c529dab4bcb0c0142c4c1bf9e"} Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.366544 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.369434 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.007875265 podStartE2EDuration="7.369418637s" podCreationTimestamp="2026-02-23 09:09:54 +0000 UTC" firstStartedPulling="2026-02-23 09:09:55.153582212 +0000 UTC m=+1326.536788359" lastFinishedPulling="2026-02-23 09:09:56.515125574 +0000 UTC m=+1327.898331731" observedRunningTime="2026-02-23 09:10:01.364524473 +0000 UTC m=+1332.747730640" watchObservedRunningTime="2026-02-23 09:10:01.369418637 +0000 UTC m=+1332.752624784" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.373824 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ktg94" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.373870 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ktg94" event={"ID":"a43f9f8e-d118-4247-b1f0-b6aac984bb4d","Type":"ContainerDied","Data":"47eab27bf9314c0fb748ddfa5f443dbc290e10b214e146aa78d69aa57e2c2ece"} Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.373911 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47eab27bf9314c0fb748ddfa5f443dbc290e10b214e146aa78d69aa57e2c2ece" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.395023 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-654489f6f-92jdq" podStartSLOduration=5.395003891 podStartE2EDuration="5.395003891s" podCreationTimestamp="2026-02-23 09:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:01.38800687 +0000 UTC m=+1332.771213047" watchObservedRunningTime="2026-02-23 09:10:01.395003891 +0000 UTC m=+1332.778210048" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.426744 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.429051 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.429117 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.528810 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b6ace7c6-781c-4053-8e6e-26232d9355da-kube-api-access-f5s77\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.528863 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-public-tls-certs\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.528917 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-httpd-config\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.528991 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-combined-ca-bundle\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.529056 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-ovndb-tls-certs\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.529116 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-config\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.529147 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-internal-tls-certs\") pod \"b6ace7c6-781c-4053-8e6e-26232d9355da\" (UID: \"b6ace7c6-781c-4053-8e6e-26232d9355da\") " Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.576423 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ace7c6-781c-4053-8e6e-26232d9355da-kube-api-access-f5s77" (OuterVolumeSpecName: "kube-api-access-f5s77") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "kube-api-access-f5s77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.579894 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615081 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:01 crc kubenswrapper[4940]: E0223 09:10:01.615444 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-httpd" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615467 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-httpd" Feb 23 09:10:01 crc kubenswrapper[4940]: E0223 09:10:01.615491 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-api" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615499 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-api" Feb 23 09:10:01 crc kubenswrapper[4940]: E0223 09:10:01.615540 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" containerName="manila-db-sync" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615546 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" containerName="manila-db-sync" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615762 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" 
containerName="neutron-api" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615791 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" containerName="manila-db-sync" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.615805 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" containerName="neutron-httpd" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.618909 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.621159 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.621411 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.621548 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.621807 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-wsgxw" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.634207 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5s77\" (UniqueName: \"kubernetes.io/projected/b6ace7c6-781c-4053-8e6e-26232d9355da-kube-api-access-f5s77\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.634238 4940 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.634661 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:01 crc 
kubenswrapper[4940]: I0223 09:10:01.683583 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fzrcd"] Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.683840 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerName="dnsmasq-dns" containerID="cri-o://e1c5f20a98159585833a8912d624d8723128f5cefefb1c490aaeb03ab9b4c6d1" gracePeriod=10 Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738165 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738225 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738278 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-scripts\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738298 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: 
\"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738359 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-ceph\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738398 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738442 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.738490 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzg2\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-kube-api-access-bhzg2\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.748579 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.750226 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.752524 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.808105 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56696ff475-gv984"] Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.810189 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.826261 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.840826 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-ceph\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.840888 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.840942 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.840962 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841001 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzg2\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-kube-api-access-bhzg2\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841020 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eafa7497-50f0-456e-9764-826840da7372-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841035 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-scripts\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841051 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841093 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841117 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841133 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddrr\" (UniqueName: \"kubernetes.io/projected/eafa7497-50f0-456e-9764-826840da7372-kube-api-access-9ddrr\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841159 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841204 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-scripts\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.841222 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-combined-ca-bundle\") pod \"manila-share-share1-0\" 
(UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.847776 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.848748 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.856854 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-ceph\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.864366 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.865302 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.869799 4940 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-gv984"] Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.870328 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-scripts\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.901377 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.948915 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.949664 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzg2\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-kube-api-access-bhzg2\") pod \"manila-share-share1-0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:01 crc kubenswrapper[4940]: I0223 09:10:01.957724 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:01.999603 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.012000 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-svc\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.012113 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-sb\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.012299 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-nb\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.012372 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.013965 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/eafa7497-50f0-456e-9764-826840da7372-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.013999 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-scripts\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014024 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014079 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-config\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014181 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wghvg\" (UniqueName: \"kubernetes.io/projected/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-kube-api-access-wghvg\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014222 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data\") pod 
\"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014329 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ddrr\" (UniqueName: \"kubernetes.io/projected/eafa7497-50f0-456e-9764-826840da7372-kube-api-access-9ddrr\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014673 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014699 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eafa7497-50f0-456e-9764-826840da7372-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.014878 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-swift-storage-0\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.015696 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.016280 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.024168 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.042576 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ddrr\" (UniqueName: \"kubernetes.io/projected/eafa7497-50f0-456e-9764-826840da7372-kube-api-access-9ddrr\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.045290 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.056484 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.056799 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-scripts\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 
09:10:02.078297 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data\") pod \"manila-scheduler-0\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.122906 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127188 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22pwn\" (UniqueName: \"kubernetes.io/projected/036763ca-16f2-4880-8381-1b9330182312-kube-api-access-22pwn\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127279 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data-custom\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127314 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-config\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127350 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wghvg\" (UniqueName: \"kubernetes.io/projected/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-kube-api-access-wghvg\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127409 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-scripts\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127509 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-swift-storage-0\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127705 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127805 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-svc\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127857 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-sb\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127882 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.127974 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036763ca-16f2-4880-8381-1b9330182312-logs\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.128026 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-nb\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.128097 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/036763ca-16f2-4880-8381-1b9330182312-etc-machine-id\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.128163 4940 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:02 
crc kubenswrapper[4940]: I0223 09:10:02.129288 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-config\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.130286 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-swift-storage-0\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.137161 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-nb\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.140563 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-sb\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.157589 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wghvg\" (UniqueName: \"kubernetes.io/projected/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-kube-api-access-wghvg\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.180904 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-svc\") pod \"dnsmasq-dns-56696ff475-gv984\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.205903 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-config" (OuterVolumeSpecName: "config") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.217927 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231278 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231358 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231412 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036763ca-16f2-4880-8381-1b9330182312-logs\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231450 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/036763ca-16f2-4880-8381-1b9330182312-etc-machine-id\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231476 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22pwn\" (UniqueName: \"kubernetes.io/projected/036763ca-16f2-4880-8381-1b9330182312-kube-api-access-22pwn\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231496 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data-custom\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231522 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-scripts\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.231961 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/036763ca-16f2-4880-8381-1b9330182312-etc-machine-id\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.232340 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.232499 4940 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.232916 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036763ca-16f2-4880-8381-1b9330182312-logs\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.235780 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-scripts\") pod \"manila-api-0\" (UID: 
\"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.241572 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.250582 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data-custom\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.255964 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.264680 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22pwn\" (UniqueName: \"kubernetes.io/projected/036763ca-16f2-4880-8381-1b9330182312-kube-api-access-22pwn\") pod \"manila-api-0\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.270417 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b6ace7c6-781c-4053-8e6e-26232d9355da" (UID: "b6ace7c6-781c-4053-8e6e-26232d9355da"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.354499 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64b687bd7d-jhpmr" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:48982->10.217.0.167:9311: read: connection reset by peer" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.354797 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64b687bd7d-jhpmr" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:48988->10.217.0.167:9311: read: connection reset by peer" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.356238 4940 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6ace7c6-781c-4053-8e6e-26232d9355da-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.373352 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.439771 4940 generic.go:334] "Generic (PLEG): container finished" podID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerID="d6ba6422de2258f1e311c528e44b43ca195cb07e82d58cb1c86e0becabb0880f" exitCode=0 Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.439798 4940 generic.go:334] "Generic (PLEG): container finished" podID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerID="ef014b2266efe4f98cd310710c01009d7f5c93e4347f3d5b8d1c34fbfa8ef086" exitCode=143 Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.439877 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cb80df2-e9ff-48b8-86c1-301afe49d9ed","Type":"ContainerDied","Data":"d6ba6422de2258f1e311c528e44b43ca195cb07e82d58cb1c86e0becabb0880f"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.439903 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cb80df2-e9ff-48b8-86c1-301afe49d9ed","Type":"ContainerDied","Data":"ef014b2266efe4f98cd310710c01009d7f5c93e4347f3d5b8d1c34fbfa8ef086"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.439912 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cb80df2-e9ff-48b8-86c1-301afe49d9ed","Type":"ContainerDied","Data":"fe937bdbbb23a100b9e5091c768a37eb67282357d73be158e34034f7eeb6763f"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.439921 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe937bdbbb23a100b9e5091c768a37eb67282357d73be158e34034f7eeb6763f" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.461651 4940 generic.go:334] "Generic (PLEG): container finished" podID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerID="e1c5f20a98159585833a8912d624d8723128f5cefefb1c490aaeb03ab9b4c6d1" exitCode=0 Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.461986 
4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" event={"ID":"f94f1199-3d0b-4502-9013-a1e408c7280e","Type":"ContainerDied","Data":"e1c5f20a98159585833a8912d624d8723128f5cefefb1c490aaeb03ab9b4c6d1"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.479269 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffc6bfc65-qhp9j" event={"ID":"b6ace7c6-781c-4053-8e6e-26232d9355da","Type":"ContainerDied","Data":"822dbf834a34eff4ed53946e3e7f9d40658456a755d08ba57863c8f211a2ef0a"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.479341 4940 scope.go:117] "RemoveContainer" containerID="e3c8d0b0648ec94d2501f62779ab5e1aa6d83808ac2d9c0160962627213f80f6" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.479556 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ffc6bfc65-qhp9j" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.509944 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerStarted","Data":"4795d11a8831ccafb873cf48b024c7a56ba6297d9af727d45b808e6052ded532"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.533395 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf","Type":"ContainerStarted","Data":"bcb31e77411c89974d1b48c0334789218d6edc4aeea297dd9885286842c5824d"} Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.643842 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.702147 4940 scope.go:117] "RemoveContainer" containerID="74b3089cf0f921509af84286bb5056e18b7337f0e6fb28a9d0aa489627692b95" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.715552 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.740074 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ffc6bfc65-qhp9j"] Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.745948 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.756253 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.763810 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7ffc6bfc65-qhp9j"] Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885424 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhhnh\" (UniqueName: \"kubernetes.io/projected/f94f1199-3d0b-4502-9013-a1e408c7280e-kube-api-access-mhhnh\") pod \"f94f1199-3d0b-4502-9013-a1e408c7280e\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885490 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdnp7\" (UniqueName: \"kubernetes.io/projected/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-kube-api-access-pdnp7\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885591 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-etc-machine-id\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885627 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-logs\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885656 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-scripts\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885691 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-nb\") pod \"f94f1199-3d0b-4502-9013-a1e408c7280e\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885730 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-combined-ca-bundle\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885778 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") " Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885796 
4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data-custom\") pod \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\" (UID: \"2cb80df2-e9ff-48b8-86c1-301afe49d9ed\") "
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885829 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-sb\") pod \"f94f1199-3d0b-4502-9013-a1e408c7280e\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") "
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885875 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-config\") pod \"f94f1199-3d0b-4502-9013-a1e408c7280e\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") "
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885930 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-svc\") pod \"f94f1199-3d0b-4502-9013-a1e408c7280e\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") "
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.885953 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-swift-storage-0\") pod \"f94f1199-3d0b-4502-9013-a1e408c7280e\" (UID: \"f94f1199-3d0b-4502-9013-a1e408c7280e\") "
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.886148 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-logs" (OuterVolumeSpecName: "logs") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.886998 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-logs\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.887063 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.942317 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94f1199-3d0b-4502-9013-a1e408c7280e-kube-api-access-mhhnh" (OuterVolumeSpecName: "kube-api-access-mhhnh") pod "f94f1199-3d0b-4502-9013-a1e408c7280e" (UID: "f94f1199-3d0b-4502-9013-a1e408c7280e"). InnerVolumeSpecName "kube-api-access-mhhnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.943339 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-kube-api-access-pdnp7" (OuterVolumeSpecName: "kube-api-access-pdnp7") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "kube-api-access-pdnp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.948330 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.962929 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-scripts" (OuterVolumeSpecName: "scripts") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.992348 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.992390 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.992400 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.992409 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhhnh\" (UniqueName: \"kubernetes.io/projected/f94f1199-3d0b-4502-9013-a1e408c7280e-kube-api-access-mhhnh\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:02 crc kubenswrapper[4940]: I0223 09:10:02.992419 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdnp7\" (UniqueName: \"kubernetes.io/projected/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-kube-api-access-pdnp7\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.123499 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: W0223 09:10:03.138701 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec61a030_1b08_4c8f_8008_842f8c7decb0.slice/crio-b37d5a91c4195d4558ffb2fab7bf666a9fbf5bdc822c5acd147ab5c8ea4927df WatchSource:0}: Error finding container b37d5a91c4195d4558ffb2fab7bf666a9fbf5bdc822c5acd147ab5c8ea4927df: Status 404 returned error can't find the container with id b37d5a91c4195d4558ffb2fab7bf666a9fbf5bdc822c5acd147ab5c8ea4927df
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.179601 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-config" (OuterVolumeSpecName: "config") pod "f94f1199-3d0b-4502-9013-a1e408c7280e" (UID: "f94f1199-3d0b-4502-9013-a1e408c7280e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.199032 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-config\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.244209 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f94f1199-3d0b-4502-9013-a1e408c7280e" (UID: "f94f1199-3d0b-4502-9013-a1e408c7280e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.254666 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.278390 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.304447 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.304492 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.305788 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f94f1199-3d0b-4502-9013-a1e408c7280e" (UID: "f94f1199-3d0b-4502-9013-a1e408c7280e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.321444 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f94f1199-3d0b-4502-9013-a1e408c7280e" (UID: "f94f1199-3d0b-4502-9013-a1e408c7280e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.330188 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f94f1199-3d0b-4502-9013-a1e408c7280e" (UID: "f94f1199-3d0b-4502-9013-a1e408c7280e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.365462 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ace7c6-781c-4053-8e6e-26232d9355da" path="/var/lib/kubelet/pods/b6ace7c6-781c-4053-8e6e-26232d9355da/volumes"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.382441 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data" (OuterVolumeSpecName: "config-data") pod "2cb80df2-e9ff-48b8-86c1-301afe49d9ed" (UID: "2cb80df2-e9ff-48b8-86c1-301afe49d9ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.406906 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.407186 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cb80df2-e9ff-48b8-86c1-301afe49d9ed-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.407198 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.407206 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f94f1199-3d0b-4502-9013-a1e408c7280e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.574334 4940 generic.go:334] "Generic (PLEG): container finished" podID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerID="91408324e2a67395fe5476ad2dc90bf08a1986c8a99b0d971404b15fcd6427e3" exitCode=0
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.574397 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64b687bd7d-jhpmr" event={"ID":"2f80e757-efd8-4d5f-a2bf-46c03b169956","Type":"ContainerDied","Data":"91408324e2a67395fe5476ad2dc90bf08a1986c8a99b0d971404b15fcd6427e3"}
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.585682 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64b687bd7d-jhpmr"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.588984 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.589445 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fzrcd" event={"ID":"f94f1199-3d0b-4502-9013-a1e408c7280e","Type":"ContainerDied","Data":"dbbe5efec5b413474aac62d752f5790ebad57d9ebdea6ce825db27674de2ca69"}
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.589488 4940 scope.go:117] "RemoveContainer" containerID="e1c5f20a98159585833a8912d624d8723128f5cefefb1c490aaeb03ab9b4c6d1"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.597583 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.599160 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eafa7497-50f0-456e-9764-826840da7372","Type":"ContainerStarted","Data":"81f200d191f48539e09ffcc091106b30fe16021977bafd2143d0a222d6b63d18"}
Feb 23 09:10:03 crc kubenswrapper[4940]: W0223 09:10:03.600216 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod036763ca_16f2_4880_8381_1b9330182312.slice/crio-dd53e4fadd0e2d6af46fff00023e38ac3242d913a381a586a849287465ecdcba WatchSource:0}: Error finding container dd53e4fadd0e2d6af46fff00023e38ac3242d913a381a586a849287465ecdcba: Status 404 returned error can't find the container with id dd53e4fadd0e2d6af46fff00023e38ac3242d913a381a586a849287465ecdcba
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.629014 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerStarted","Data":"2a394c3fd47099b97de59751ef4eb8ae13d9edc47d15bd550d03fa9dd04ca446"}
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.639219 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ec61a030-1b08-4c8f-8008-842f8c7decb0","Type":"ContainerStarted","Data":"b37d5a91c4195d4558ffb2fab7bf666a9fbf5bdc822c5acd147ab5c8ea4927df"}
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.647679 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fzrcd"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.651802 4940 scope.go:117] "RemoveContainer" containerID="5bc358fbbd7560a89df27d2309e81e5c53184a1777841e52dd8b88c6f405a4bf"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.652979 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.653724 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf","Type":"ContainerStarted","Data":"11797bef9538eb6edb5e371221d846a781a4b92d10e5219e36aba40f8881e8b1"}
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.661407 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fzrcd"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.701874 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-gv984"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.706655 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=5.433669787 podStartE2EDuration="9.706638338s" podCreationTimestamp="2026-02-23 09:09:54 +0000 UTC" firstStartedPulling="2026-02-23 09:09:56.747878546 +0000 UTC m=+1328.131084703" lastFinishedPulling="2026-02-23 09:10:01.020847087 +0000 UTC m=+1332.404053254" observedRunningTime="2026-02-23 09:10:03.702709835 +0000 UTC m=+1335.085916012" watchObservedRunningTime="2026-02-23 09:10:03.706638338 +0000 UTC m=+1335.089844495"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.711498 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data-custom\") pod \"2f80e757-efd8-4d5f-a2bf-46c03b169956\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") "
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.711546 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-combined-ca-bundle\") pod \"2f80e757-efd8-4d5f-a2bf-46c03b169956\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") "
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.711645 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data\") pod \"2f80e757-efd8-4d5f-a2bf-46c03b169956\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") "
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.711789 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f80e757-efd8-4d5f-a2bf-46c03b169956-logs\") pod \"2f80e757-efd8-4d5f-a2bf-46c03b169956\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") "
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.711827 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5495\" (UniqueName: \"kubernetes.io/projected/2f80e757-efd8-4d5f-a2bf-46c03b169956-kube-api-access-w5495\") pod \"2f80e757-efd8-4d5f-a2bf-46c03b169956\" (UID: \"2f80e757-efd8-4d5f-a2bf-46c03b169956\") "
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.714142 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f80e757-efd8-4d5f-a2bf-46c03b169956-logs" (OuterVolumeSpecName: "logs") pod "2f80e757-efd8-4d5f-a2bf-46c03b169956" (UID: "2f80e757-efd8-4d5f-a2bf-46c03b169956"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.725821 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2f80e757-efd8-4d5f-a2bf-46c03b169956" (UID: "2f80e757-efd8-4d5f-a2bf-46c03b169956"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.731025 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.763342 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.764029 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f80e757-efd8-4d5f-a2bf-46c03b169956-kube-api-access-w5495" (OuterVolumeSpecName: "kube-api-access-w5495") pod "2f80e757-efd8-4d5f-a2bf-46c03b169956" (UID: "2f80e757-efd8-4d5f-a2bf-46c03b169956"). InnerVolumeSpecName "kube-api-access-w5495". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.800664 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: E0223 09:10:03.801302 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerName="init"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801318 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerName="init"
Feb 23 09:10:03 crc kubenswrapper[4940]: E0223 09:10:03.801333 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerName="dnsmasq-dns"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801339 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerName="dnsmasq-dns"
Feb 23 09:10:03 crc kubenswrapper[4940]: E0223 09:10:03.801356 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801362 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api"
Feb 23 09:10:03 crc kubenswrapper[4940]: E0223 09:10:03.801373 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api-log"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801379 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api-log"
Feb 23 09:10:03 crc kubenswrapper[4940]: E0223 09:10:03.801385 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api-log"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801391 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api-log"
Feb 23 09:10:03 crc kubenswrapper[4940]: E0223 09:10:03.801401 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801407 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801576 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api-log"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801585 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" containerName="dnsmasq-dns"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801595 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801619 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" containerName="cinder-api"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.801636 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" containerName="barbican-api-log"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.802601 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.805872 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data" (OuterVolumeSpecName: "config-data") pod "2f80e757-efd8-4d5f-a2bf-46c03b169956" (UID: "2f80e757-efd8-4d5f-a2bf-46c03b169956"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.808085 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.808325 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.810828 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.828690 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.828726 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.828737 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f80e757-efd8-4d5f-a2bf-46c03b169956-logs\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.828746 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5495\" (UniqueName: \"kubernetes.io/projected/2f80e757-efd8-4d5f-a2bf-46c03b169956-kube-api-access-w5495\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.847769 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f80e757-efd8-4d5f-a2bf-46c03b169956" (UID: "2f80e757-efd8-4d5f-a2bf-46c03b169956"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.853509 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.931294 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.931460 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.931555 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqcvp\" (UniqueName: \"kubernetes.io/projected/f91c0e0d-08da-47b9-acef-5e4e9856fc85-kube-api-access-rqcvp\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.931687 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91c0e0d-08da-47b9-acef-5e4e9856fc85-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.931767 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91c0e0d-08da-47b9-acef-5e4e9856fc85-logs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.931989 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-config-data\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.932254 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.932360 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-config-data-custom\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.932500 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-scripts\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:03 crc kubenswrapper[4940]: I0223 09:10:03.932669 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f80e757-efd8-4d5f-a2bf-46c03b169956-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.034988 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91c0e0d-08da-47b9-acef-5e4e9856fc85-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036142 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91c0e0d-08da-47b9-acef-5e4e9856fc85-logs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036219 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-config-data\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036325 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036353 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-config-data-custom\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036448 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-scripts\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036501 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036574 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.036627 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqcvp\" (UniqueName: \"kubernetes.io/projected/f91c0e0d-08da-47b9-acef-5e4e9856fc85-kube-api-access-rqcvp\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.035213 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f91c0e0d-08da-47b9-acef-5e4e9856fc85-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.040286 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f91c0e0d-08da-47b9-acef-5e4e9856fc85-logs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.044179 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-config-data-custom\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.044413 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-config-data\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.045154 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.045646 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.053424 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-scripts\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.053588 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqcvp\" (UniqueName: \"kubernetes.io/projected/f91c0e0d-08da-47b9-acef-5e4e9856fc85-kube-api-access-rqcvp\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.065168 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91c0e0d-08da-47b9-acef-5e4e9856fc85-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f91c0e0d-08da-47b9-acef-5e4e9856fc85\") " pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.314642 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.507814 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.685927 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64b687bd7d-jhpmr" event={"ID":"2f80e757-efd8-4d5f-a2bf-46c03b169956","Type":"ContainerDied","Data":"6208bf17e39601a8aab0fe3fe464b39191a36eed24d0d5b693c197aeeb819382"}
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.686225 4940 scope.go:117] "RemoveContainer" containerID="91408324e2a67395fe5476ad2dc90bf08a1986c8a99b0d971404b15fcd6427e3"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.686393 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64b687bd7d-jhpmr"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.723441 4940 generic.go:334] "Generic (PLEG): container finished" podID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerID="db15bfaf7a8b72c195d6fba810b00b10561032db7fdd4cbff5d73c18bb181fe0" exitCode=0
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.723483 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-gv984" event={"ID":"4ddde6a1-2d30-4c95-aac1-ab2f32130f14","Type":"ContainerDied","Data":"db15bfaf7a8b72c195d6fba810b00b10561032db7fdd4cbff5d73c18bb181fe0"}
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.723524 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-gv984" event={"ID":"4ddde6a1-2d30-4c95-aac1-ab2f32130f14","Type":"ContainerStarted","Data":"88e47992b3f03052d1d84eb4c4fb42281e097e3b818b50306ec6e627d936ad0d"}
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.738714 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"036763ca-16f2-4880-8381-1b9330182312","Type":"ContainerStarted","Data":"c763a9d872a420837f57148b304dfd674439c18ea4898799aa5da0625339e876"}
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.738765 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"036763ca-16f2-4880-8381-1b9330182312","Type":"ContainerStarted","Data":"dd53e4fadd0e2d6af46fff00023e38ac3242d913a381a586a849287465ecdcba"}
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.771501 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64b687bd7d-jhpmr"]
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.773664 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.784019 4940 kubelet.go:2431] "SyncLoop REMOVE"
source="api" pods=["openstack/barbican-api-64b687bd7d-jhpmr"] Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.818690 4940 scope.go:117] "RemoveContainer" containerID="44e17f3d5963f3c652e966b59a18c53aa8cfdc71dc72079e653c8c9e8206a72b" Feb 23 09:10:04 crc kubenswrapper[4940]: I0223 09:10:04.958159 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.027931 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.063050 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.212454 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.327568 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.333791 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.380836 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb80df2-e9ff-48b8-86c1-301afe49d9ed" path="/var/lib/kubelet/pods/2cb80df2-e9ff-48b8-86c1-301afe49d9ed/volumes" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.381643 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f80e757-efd8-4d5f-a2bf-46c03b169956" path="/var/lib/kubelet/pods/2f80e757-efd8-4d5f-a2bf-46c03b169956/volumes" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.382244 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f94f1199-3d0b-4502-9013-a1e408c7280e" path="/var/lib/kubelet/pods/f94f1199-3d0b-4502-9013-a1e408c7280e/volumes" Feb 23 09:10:05 crc kubenswrapper[4940]: 
I0223 09:10:05.476636 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.513015 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.785726 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-gv984" event={"ID":"4ddde6a1-2d30-4c95-aac1-ab2f32130f14","Type":"ContainerStarted","Data":"6c4b56f59540d71e867900ff830d197fc4c40d5bdb06f597b5ce3e5b60639c97"} Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.786244 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.790814 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eafa7497-50f0-456e-9764-826840da7372","Type":"ContainerStarted","Data":"e72875b833bfe94af04f9db504ebf0b2eaf410cdc9ae1f3a46b2cfef4e24be0d"} Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.801908 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"036763ca-16f2-4880-8381-1b9330182312","Type":"ContainerStarted","Data":"459cea5c4c83deee595f16f69abf0fbb5813d9c3ddb1304555043331bad5a3c9"} Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.802056 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api-log" containerID="cri-o://c763a9d872a420837f57148b304dfd674439c18ea4898799aa5da0625339e876" gracePeriod=30 Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.802269 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.802270 4940 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/manila-api-0" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api" containerID="cri-o://459cea5c4c83deee595f16f69abf0fbb5813d9c3ddb1304555043331bad5a3c9" gracePeriod=30 Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.803761 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f91c0e0d-08da-47b9-acef-5e4e9856fc85","Type":"ContainerStarted","Data":"2c958c7d03c01e388da77a968e5526f338e9f28db6563b456105e0fd0bd28604"} Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.840150 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56696ff475-gv984" podStartSLOduration=4.84013064 podStartE2EDuration="4.84013064s" podCreationTimestamp="2026-02-23 09:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:05.816036013 +0000 UTC m=+1337.199242170" watchObservedRunningTime="2026-02-23 09:10:05.84013064 +0000 UTC m=+1337.223336797" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.846927 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.846907663 podStartE2EDuration="4.846907663s" podCreationTimestamp="2026-02-23 09:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:05.839031106 +0000 UTC m=+1337.222237263" watchObservedRunningTime="2026-02-23 09:10:05.846907663 +0000 UTC m=+1337.230113820" Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.849115 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerStarted","Data":"ab8a86f359525935b23dd226737644653c8c81f21f8a71e78f7cb5d4e05d2df2"} Feb 23 09:10:05 crc kubenswrapper[4940]: 
I0223 09:10:05.850254 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="cinder-scheduler" containerID="cri-o://b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553" gracePeriod=30 Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.850368 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="probe" containerID="cri-o://17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe" gracePeriod=30 Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.850420 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="cinder-backup" containerID="cri-o://fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de" gracePeriod=30 Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.850531 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="probe" containerID="cri-o://9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8" gracePeriod=30 Feb 23 09:10:05 crc kubenswrapper[4940]: I0223 09:10:05.883823 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.23635928 podStartE2EDuration="9.883807533s" podCreationTimestamp="2026-02-23 09:09:56 +0000 UTC" firstStartedPulling="2026-02-23 09:09:57.899343568 +0000 UTC m=+1329.282549725" lastFinishedPulling="2026-02-23 09:10:04.546791821 +0000 UTC m=+1335.929997978" observedRunningTime="2026-02-23 09:10:05.8824816 +0000 UTC m=+1337.265687767" watchObservedRunningTime="2026-02-23 09:10:05.883807533 +0000 UTC m=+1337.267013690" Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 
09:10:06.892673 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f91c0e0d-08da-47b9-acef-5e4e9856fc85","Type":"ContainerStarted","Data":"3210a4685204d1654290e57e04d857802112cd441532a6c9b77bc9c6fa66fb38"} Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.895299 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eafa7497-50f0-456e-9764-826840da7372","Type":"ContainerStarted","Data":"043f3c89b263f3dc8987df0790dbb76fe90597d156f7760b6973738f63334b58"} Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.901664 4940 generic.go:334] "Generic (PLEG): container finished" podID="036763ca-16f2-4880-8381-1b9330182312" containerID="459cea5c4c83deee595f16f69abf0fbb5813d9c3ddb1304555043331bad5a3c9" exitCode=0 Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.901687 4940 generic.go:334] "Generic (PLEG): container finished" podID="036763ca-16f2-4880-8381-1b9330182312" containerID="c763a9d872a420837f57148b304dfd674439c18ea4898799aa5da0625339e876" exitCode=143 Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.901726 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"036763ca-16f2-4880-8381-1b9330182312","Type":"ContainerDied","Data":"459cea5c4c83deee595f16f69abf0fbb5813d9c3ddb1304555043331bad5a3c9"} Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.901750 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"036763ca-16f2-4880-8381-1b9330182312","Type":"ContainerDied","Data":"c763a9d872a420837f57148b304dfd674439c18ea4898799aa5da0625339e876"} Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.916052 4940 generic.go:334] "Generic (PLEG): container finished" podID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerID="17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe" exitCode=0 Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.921541 4940 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5a733b1-ef74-46e6-804f-c0ad92ef38aa","Type":"ContainerDied","Data":"17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe"} Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.922181 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:10:06 crc kubenswrapper[4940]: I0223 09:10:06.934720 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.787766865 podStartE2EDuration="5.934702875s" podCreationTimestamp="2026-02-23 09:10:01 +0000 UTC" firstStartedPulling="2026-02-23 09:10:03.267764221 +0000 UTC m=+1334.650970368" lastFinishedPulling="2026-02-23 09:10:04.414700221 +0000 UTC m=+1335.797906378" observedRunningTime="2026-02-23 09:10:06.92179815 +0000 UTC m=+1338.305004307" watchObservedRunningTime="2026-02-23 09:10:06.934702875 +0000 UTC m=+1338.317909032" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.282777 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.396767 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-scripts\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.397072 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/036763ca-16f2-4880-8381-1b9330182312-etc-machine-id\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.397180 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data-custom\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.397228 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22pwn\" (UniqueName: \"kubernetes.io/projected/036763ca-16f2-4880-8381-1b9330182312-kube-api-access-22pwn\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.397313 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-combined-ca-bundle\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.397335 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.397376 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036763ca-16f2-4880-8381-1b9330182312-logs\") pod \"036763ca-16f2-4880-8381-1b9330182312\" (UID: \"036763ca-16f2-4880-8381-1b9330182312\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.398259 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/036763ca-16f2-4880-8381-1b9330182312-logs" (OuterVolumeSpecName: "logs") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.400155 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/036763ca-16f2-4880-8381-1b9330182312-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.431757 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.432055 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/036763ca-16f2-4880-8381-1b9330182312-kube-api-access-22pwn" (OuterVolumeSpecName: "kube-api-access-22pwn") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "kube-api-access-22pwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.435388 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-scripts" (OuterVolumeSpecName: "scripts") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.444002 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.476962 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data" (OuterVolumeSpecName: "config-data") pod "036763ca-16f2-4880-8381-1b9330182312" (UID: "036763ca-16f2-4880-8381-1b9330182312"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.490625 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.501994 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22pwn\" (UniqueName: \"kubernetes.io/projected/036763ca-16f2-4880-8381-1b9330182312-kube-api-access-22pwn\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.502026 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.502035 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.502043 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/036763ca-16f2-4880-8381-1b9330182312-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.502051 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.502061 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/036763ca-16f2-4880-8381-1b9330182312-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.502072 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/036763ca-16f2-4880-8381-1b9330182312-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.612389 4940 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-scripts\") pod \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.612788 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z84q\" (UniqueName: \"kubernetes.io/projected/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-kube-api-access-4z84q\") pod \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.613671 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data-custom\") pod \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.613764 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data\") pod \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.613791 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-combined-ca-bundle\") pod \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.613878 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-etc-machine-id\") pod 
\"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\" (UID: \"e5a733b1-ef74-46e6-804f-c0ad92ef38aa\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.614902 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5a733b1-ef74-46e6-804f-c0ad92ef38aa" (UID: "e5a733b1-ef74-46e6-804f-c0ad92ef38aa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.622837 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-scripts" (OuterVolumeSpecName: "scripts") pod "e5a733b1-ef74-46e6-804f-c0ad92ef38aa" (UID: "e5a733b1-ef74-46e6-804f-c0ad92ef38aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.626054 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5a733b1-ef74-46e6-804f-c0ad92ef38aa" (UID: "e5a733b1-ef74-46e6-804f-c0ad92ef38aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.634865 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-kube-api-access-4z84q" (OuterVolumeSpecName: "kube-api-access-4z84q") pod "e5a733b1-ef74-46e6-804f-c0ad92ef38aa" (UID: "e5a733b1-ef74-46e6-804f-c0ad92ef38aa"). InnerVolumeSpecName "kube-api-access-4z84q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.718535 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.718569 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z84q\" (UniqueName: \"kubernetes.io/projected/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-kube-api-access-4z84q\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.718582 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.718592 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.768544 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a733b1-ef74-46e6-804f-c0ad92ef38aa" (UID: "e5a733b1-ef74-46e6-804f-c0ad92ef38aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.794633 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.806759 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data" (OuterVolumeSpecName: "config-data") pod "e5a733b1-ef74-46e6-804f-c0ad92ef38aa" (UID: "e5a733b1-ef74-46e6-804f-c0ad92ef38aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.819901 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.819931 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a733b1-ef74-46e6-804f-c0ad92ef38aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921635 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-sys\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921694 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-iscsi\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921727 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-scripts\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: 
\"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921756 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data-custom\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921787 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921814 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-combined-ca-bundle\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921866 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq7dq\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-kube-api-access-nq7dq\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921895 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-ceph\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921938 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-lib-cinder\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921928 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921972 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-lib-modules\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921992 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-dev\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.921990 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-sys" (OuterVolumeSpecName: "sys") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922005 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-machine-id\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922025 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-cinder\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922045 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-brick\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922099 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-run\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922122 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-nvme\") pod \"0911a5da-3249-4858-8246-5334db240255\" (UID: \"0911a5da-3249-4858-8246-5334db240255\") " Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922242 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922565 4940 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922582 4940 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922590 4940 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-sys\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922653 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922682 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-dev" (OuterVolumeSpecName: "dev") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922727 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922745 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922763 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.922780 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-run" (OuterVolumeSpecName: "run") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.924936 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.925142 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-scripts" (OuterVolumeSpecName: "scripts") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.928795 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.929850 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-kube-api-access-nq7dq" (OuterVolumeSpecName: "kube-api-access-nq7dq") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "kube-api-access-nq7dq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.929964 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-ceph" (OuterVolumeSpecName: "ceph") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.948321 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"036763ca-16f2-4880-8381-1b9330182312","Type":"ContainerDied","Data":"dd53e4fadd0e2d6af46fff00023e38ac3242d913a381a586a849287465ecdcba"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.948370 4940 scope.go:117] "RemoveContainer" containerID="459cea5c4c83deee595f16f69abf0fbb5813d9c3ddb1304555043331bad5a3c9" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.948391 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.960507 4940 generic.go:334] "Generic (PLEG): container finished" podID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerID="b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553" exitCode=0 Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.960834 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5a733b1-ef74-46e6-804f-c0ad92ef38aa","Type":"ContainerDied","Data":"b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.960893 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5a733b1-ef74-46e6-804f-c0ad92ef38aa","Type":"ContainerDied","Data":"81da9152bc042f95853473ae615973c62f415f4fefb401ce0805b43028302201"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.960850 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.971502 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f91c0e0d-08da-47b9-acef-5e4e9856fc85","Type":"ContainerStarted","Data":"07e253898d0b9af6d7ff985bfbb08236f20f8d225ba0200cff8fe6e2fb805c63"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.972311 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.984597 4940 generic.go:334] "Generic (PLEG): container finished" podID="0911a5da-3249-4858-8246-5334db240255" containerID="9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8" exitCode=0 Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.984640 4940 generic.go:334] "Generic (PLEG): container finished" podID="0911a5da-3249-4858-8246-5334db240255" containerID="fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de" exitCode=0 Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.984854 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.984972 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0911a5da-3249-4858-8246-5334db240255","Type":"ContainerDied","Data":"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.985028 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0911a5da-3249-4858-8246-5334db240255","Type":"ContainerDied","Data":"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.985039 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0911a5da-3249-4858-8246-5334db240255","Type":"ContainerDied","Data":"f819f014d4e2ddd6f0104c3634bbb56bb28b87dbd06d18188332947a2453706b"} Feb 23 09:10:07 crc kubenswrapper[4940]: I0223 09:10:07.989740 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024149 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024375 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024387 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024396 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq7dq\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-kube-api-access-nq7dq\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024404 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0911a5da-3249-4858-8246-5334db240255-ceph\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024412 4940 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024419 4940 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-dev\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024427 4940 reconciler_common.go:293] "Volume 
detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024434 4940 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024443 4940 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024452 4940 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-run\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.024460 4940 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0911a5da-3249-4858-8246-5334db240255-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.029606 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.02958095 podStartE2EDuration="5.02958095s" podCreationTimestamp="2026-02-23 09:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:08.007415743 +0000 UTC m=+1339.390621920" watchObservedRunningTime="2026-02-23 09:10:08.02958095 +0000 UTC m=+1339.412787107" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.096744 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data" (OuterVolumeSpecName: 
"config-data") pod "0911a5da-3249-4858-8246-5334db240255" (UID: "0911a5da-3249-4858-8246-5334db240255"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.127490 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0911a5da-3249-4858-8246-5334db240255-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.171283 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.190533 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.194147 4940 scope.go:117] "RemoveContainer" containerID="c763a9d872a420837f57148b304dfd674439c18ea4898799aa5da0625339e876" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.215817 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.216291 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="probe" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216316 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="probe" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.216335 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216344 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.216381 4940 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0911a5da-3249-4858-8246-5334db240255" containerName="cinder-backup" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216387 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="cinder-backup" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.216402 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="probe" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216408 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="probe" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.216415 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="cinder-scheduler" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216421 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="cinder-scheduler" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.216440 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api-log" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216447 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api-log" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216665 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="probe" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216691 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="probe" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216705 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="036763ca-16f2-4880-8381-1b9330182312" 
containerName="manila-api" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216719 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" containerName="cinder-scheduler" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216731 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0911a5da-3249-4858-8246-5334db240255" containerName="cinder-backup" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.216747 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="036763ca-16f2-4880-8381-1b9330182312" containerName="manila-api-log" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.217930 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.220584 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.220778 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.222103 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.236839 4940 scope.go:117] "RemoveContainer" containerID="17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.242659 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8485464bb-cvmj5" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.246706 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.269376 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:10:08 crc 
kubenswrapper[4940]: I0223 09:10:08.275944 4940 scope.go:117] "RemoveContainer" containerID="b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.291560 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.343970 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-scripts\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344087 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f78e173-a538-4fa3-804d-25bff89a23ca-logs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344239 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-config-data\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344282 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-internal-tls-certs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344311 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-config-data-custom\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344408 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2bxp\" (UniqueName: \"kubernetes.io/projected/3f78e173-a538-4fa3-804d-25bff89a23ca-kube-api-access-f2bxp\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344455 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f78e173-a538-4fa3-804d-25bff89a23ca-etc-machine-id\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.344557 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-public-tls-certs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.346469 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.346706 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.350975 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.357274 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.400801 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.346798 4940 scope.go:117] "RemoveContainer" containerID="17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.410045 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe\": container with ID starting with 17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe not found: ID does not exist" containerID="17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.410092 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe"} err="failed to get container status \"17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe\": rpc error: code = NotFound desc = could not find container \"17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe\": container with ID starting with 17618246250e8a923236e125ed618609457b538532a5862c77e536ed6573ebfe not found: ID does not exist" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.410120 4940 scope.go:117] "RemoveContainer" containerID="b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.413889 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553\": container with ID starting with b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553 not found: ID does not exist" containerID="b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.413930 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553"} err="failed to get container status \"b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553\": rpc error: code = NotFound desc = could not find container \"b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553\": container with ID starting with b12a95c55002c79e218f34a64026209d4d4fd68a1bedaa8a635c24e7ef547553 not found: ID does not exist" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.413956 4940 scope.go:117] "RemoveContainer" containerID="9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.452740 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea914bd-a046-42ba-942e-7d3d778d0b52-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.453759 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.453871 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.453952 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-scripts\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454037 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcbk\" (UniqueName: \"kubernetes.io/projected/8ea914bd-a046-42ba-942e-7d3d778d0b52-kube-api-access-gtcbk\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454065 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f78e173-a538-4fa3-804d-25bff89a23ca-logs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454162 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-config-data\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454193 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-internal-tls-certs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 
crc kubenswrapper[4940]: I0223 09:10:08.454217 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-config-data-custom\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454255 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454336 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2bxp\" (UniqueName: \"kubernetes.io/projected/3f78e173-a538-4fa3-804d-25bff89a23ca-kube-api-access-f2bxp\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454361 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454397 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f78e173-a538-4fa3-804d-25bff89a23ca-etc-machine-id\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454478 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.454513 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-public-tls-certs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.455694 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f78e173-a538-4fa3-804d-25bff89a23ca-etc-machine-id\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.461237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-public-tls-certs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.461263 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f78e173-a538-4fa3-804d-25bff89a23ca-logs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.461295 4940 scope.go:117] "RemoveContainer" containerID="fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.462663 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-config-data\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.463202 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.464656 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6884678d78-ckt87"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.464863 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-config-data-custom\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.464926 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon-log" containerID="cri-o://9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd" gracePeriod=30 Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.465040 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" containerID="cri-o://7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736" gracePeriod=30 Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.465569 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-internal-tls-certs\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.470047 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f78e173-a538-4fa3-804d-25bff89a23ca-scripts\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.474664 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2bxp\" (UniqueName: \"kubernetes.io/projected/3f78e173-a538-4fa3-804d-25bff89a23ca-kube-api-access-f2bxp\") pod \"manila-api-0\" (UID: \"3f78e173-a538-4fa3-804d-25bff89a23ca\") " pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.475683 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.483964 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.487205 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.487784 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:59534->10.217.0.151:8443: read: connection reset by peer" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.526579 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.531592 4940 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.531719 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.545136 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.546979 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.556653 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea914bd-a046-42ba-942e-7d3d778d0b52-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.556723 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.557576 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcbk\" (UniqueName: \"kubernetes.io/projected/8ea914bd-a046-42ba-942e-7d3d778d0b52-kube-api-access-gtcbk\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.557821 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.557889 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.557967 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.558241 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ea914bd-a046-42ba-942e-7d3d778d0b52-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.563265 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.573572 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-scripts\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.573625 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.573649 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea914bd-a046-42ba-942e-7d3d778d0b52-config-data\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.580335 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcbk\" (UniqueName: \"kubernetes.io/projected/8ea914bd-a046-42ba-942e-7d3d778d0b52-kube-api-access-gtcbk\") pod \"cinder-scheduler-0\" (UID: \"8ea914bd-a046-42ba-942e-7d3d778d0b52\") " pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.595870 4940 scope.go:117] "RemoveContainer" containerID="9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.596485 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8\": container with ID starting with 9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8 not found: ID does not exist" containerID="9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.596570 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8"} err="failed to get container status \"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8\": rpc error: code = 
NotFound desc = could not find container \"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8\": container with ID starting with 9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8 not found: ID does not exist" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.596600 4940 scope.go:117] "RemoveContainer" containerID="fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de" Feb 23 09:10:08 crc kubenswrapper[4940]: E0223 09:10:08.597709 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de\": container with ID starting with fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de not found: ID does not exist" containerID="fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.597738 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de"} err="failed to get container status \"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de\": rpc error: code = NotFound desc = could not find container \"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de\": container with ID starting with fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de not found: ID does not exist" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.597757 4940 scope.go:117] "RemoveContainer" containerID="9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.601784 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8"} err="failed to get container status \"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8\": rpc 
error: code = NotFound desc = could not find container \"9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8\": container with ID starting with 9dc625dfed1efcdc6c459d2aaae2a1308f6983bc1ca6992dd85cdee635c328a8 not found: ID does not exist" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.601825 4940 scope.go:117] "RemoveContainer" containerID="fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.602603 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de"} err="failed to get container status \"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de\": rpc error: code = NotFound desc = could not find container \"fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de\": container with ID starting with fc8258480e70099ee6e8f1fcebd1c467d0da852e3235463d521c802cf2fc23de not found: ID does not exist" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659203 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-run\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659674 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-config-data\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659705 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kxv\" (UniqueName: 
\"kubernetes.io/projected/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-kube-api-access-w8kxv\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659742 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659756 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659872 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659893 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-ceph\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.659922 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-lib-modules\") pod \"cinder-backup-0\" 
(UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660016 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-dev\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660370 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-nvme\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660531 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660561 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660625 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-scripts\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660761 
4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-sys\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.660942 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-config-data-custom\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.661045 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.671389 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.771982 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-run\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772118 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-run\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772677 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-config-data\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772714 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kxv\" (UniqueName: \"kubernetes.io/projected/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-kube-api-access-w8kxv\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772772 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772788 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772824 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772845 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-ceph\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772876 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-lib-modules\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772899 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-dev\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.772923 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-nvme\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: 
I0223 09:10:08.772954 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.773009 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.773033 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-scripts\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.773114 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-sys\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.773193 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-config-data-custom\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.773229 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-combined-ca-bundle\") pod 
\"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.774010 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-lib-modules\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775316 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775351 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-nvme\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775389 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775383 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775417 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dev\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-dev\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775357 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775503 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.775587 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-sys\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.778622 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-config-data\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.780041 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-scripts\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.783387 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-config-data-custom\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.786586 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-ceph\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.789243 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.795220 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kxv\" (UniqueName: \"kubernetes.io/projected/20de4506-b14e-4f9b-9afc-c4d9ac6aef52-kube-api-access-w8kxv\") pod \"cinder-backup-0\" (UID: \"20de4506-b14e-4f9b-9afc-c4d9ac6aef52\") " pod="openstack/cinder-backup-0" Feb 23 09:10:08 crc kubenswrapper[4940]: I0223 09:10:08.902082 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 23 09:10:09 crc kubenswrapper[4940]: I0223 09:10:09.247086 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 09:10:09 crc kubenswrapper[4940]: I0223 09:10:09.272133 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 23 09:10:09 crc kubenswrapper[4940]: I0223 09:10:09.377536 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="036763ca-16f2-4880-8381-1b9330182312" path="/var/lib/kubelet/pods/036763ca-16f2-4880-8381-1b9330182312/volumes" Feb 23 09:10:09 crc kubenswrapper[4940]: I0223 09:10:09.379193 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0911a5da-3249-4858-8246-5334db240255" path="/var/lib/kubelet/pods/0911a5da-3249-4858-8246-5334db240255/volumes" Feb 23 09:10:09 crc kubenswrapper[4940]: I0223 09:10:09.379973 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a733b1-ef74-46e6-804f-c0ad92ef38aa" path="/var/lib/kubelet/pods/e5a733b1-ef74-46e6-804f-c0ad92ef38aa/volumes" Feb 23 09:10:09 crc kubenswrapper[4940]: I0223 09:10:09.616600 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 23 09:10:09 crc kubenswrapper[4940]: W0223 09:10:09.632863 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20de4506_b14e_4f9b_9afc_c4d9ac6aef52.slice/crio-be90ca16b82e99063e4a3f190c2f50f12f8ccd55279444398847be45fe2276ef WatchSource:0}: Error finding container be90ca16b82e99063e4a3f190c2f50f12f8ccd55279444398847be45fe2276ef: Status 404 returned error can't find the container with id be90ca16b82e99063e4a3f190c2f50f12f8ccd55279444398847be45fe2276ef Feb 23 09:10:10 crc kubenswrapper[4940]: I0223 09:10:10.080412 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"20de4506-b14e-4f9b-9afc-c4d9ac6aef52","Type":"ContainerStarted","Data":"ff7092724fa5deefe4eb6d8a8c1b3989dea6c413b5b5331666c0718e5e65ac93"} Feb 23 09:10:10 crc kubenswrapper[4940]: I0223 09:10:10.080831 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"20de4506-b14e-4f9b-9afc-c4d9ac6aef52","Type":"ContainerStarted","Data":"be90ca16b82e99063e4a3f190c2f50f12f8ccd55279444398847be45fe2276ef"} Feb 23 09:10:10 crc kubenswrapper[4940]: I0223 09:10:10.100183 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3f78e173-a538-4fa3-804d-25bff89a23ca","Type":"ContainerStarted","Data":"4f4f4a4586d2a032fd35f6d02d18718810e9f5412cb87ff59aa3b5faa8e04c0c"} Feb 23 09:10:10 crc kubenswrapper[4940]: I0223 09:10:10.100478 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 23 09:10:10 crc kubenswrapper[4940]: I0223 09:10:10.116393 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ea914bd-a046-42ba-942e-7d3d778d0b52","Type":"ContainerStarted","Data":"6b11bffbd37970e57a15f773d75b94742d36482ffcf29942a2c139cbe51b743b"} Feb 23 09:10:10 crc kubenswrapper[4940]: I0223 09:10:10.227656 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.019975 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.121742 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.150092 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"3f78e173-a538-4fa3-804d-25bff89a23ca","Type":"ContainerStarted","Data":"cbdd5423fab3928118acaf4c72f143bf086b42b071cccbdcaa8464e1f071b831"} Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.150132 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"3f78e173-a538-4fa3-804d-25bff89a23ca","Type":"ContainerStarted","Data":"3a2b863cce7725cea3dbc6c009e6cc14ca5dbd4c36a95d1e680856313d5bb2aa"} Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.151282 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.189080 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ea914bd-a046-42ba-942e-7d3d778d0b52","Type":"ContainerStarted","Data":"ae4f3fe4e42d897014b66afcbc8a33fa4a5915cfc495aca41db420d9dc158afd"} Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.196166 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="cinder-volume" containerID="cri-o://bcb31e77411c89974d1b48c0334789218d6edc4aeea297dd9885286842c5824d" gracePeriod=30 Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.197417 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"20de4506-b14e-4f9b-9afc-c4d9ac6aef52","Type":"ContainerStarted","Data":"1660e94b8876d87d7290ee28480c8bd37bd2836f85a89b1da6fa3791d66cfb35"} Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.198219 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="probe" containerID="cri-o://11797bef9538eb6edb5e371221d846a781a4b92d10e5219e36aba40f8881e8b1" gracePeriod=30 Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.222805 4940 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.222788231 podStartE2EDuration="3.222788231s" podCreationTimestamp="2026-02-23 09:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:11.188621668 +0000 UTC m=+1342.571827845" watchObservedRunningTime="2026-02-23 09:10:11.222788231 +0000 UTC m=+1342.605994388" Feb 23 09:10:11 crc kubenswrapper[4940]: I0223 09:10:11.246125 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.246106233 podStartE2EDuration="3.246106233s" podCreationTimestamp="2026-02-23 09:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:11.231977059 +0000 UTC m=+1342.615183216" watchObservedRunningTime="2026-02-23 09:10:11.246106233 +0000 UTC m=+1342.629312390" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.216197 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-96958f474-956sq" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.243338 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-96958f474-956sq" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.254121 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8ea914bd-a046-42ba-942e-7d3d778d0b52","Type":"ContainerStarted","Data":"f56cc034ed262ee734872d6411fcaafb224133231218fd0302f0fb4103eef184"} Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.277178 4940 generic.go:334] "Generic (PLEG): container finished" podID="e330abf6-9282-4221-b286-672ffc3985e7" containerID="7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736" exitCode=0 Feb 23 09:10:12 crc 
kubenswrapper[4940]: I0223 09:10:12.280675 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6884678d78-ckt87" event={"ID":"e330abf6-9282-4221-b286-672ffc3985e7","Type":"ContainerDied","Data":"7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736"} Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.310326 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.310307524 podStartE2EDuration="4.310307524s" podCreationTimestamp="2026-02-23 09:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:12.30699453 +0000 UTC m=+1343.690200687" watchObservedRunningTime="2026-02-23 09:10:12.310307524 +0000 UTC m=+1343.693513681" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.380116 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.386961 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c6749c74d-ng8p9"] Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.387218 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c6749c74d-ng8p9" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-log" containerID="cri-o://74f58278e9eaf1ba3aaa6c4c89dd754902de0e73978ad252afed9398bcb8f2e6" gracePeriod=30 Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.387352 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c6749c74d-ng8p9" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-api" containerID="cri-o://7a06c9772d7b7b2824187fdbe6566e63cba4d50fa670b25290e09e202ad9a4db" gracePeriod=30 Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.577756 4940 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.592247 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-657b46f66d-5snf5" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.648791 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.744687 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ghmbn"] Feb 23 09:10:12 crc kubenswrapper[4940]: I0223 09:10:12.745221 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" containerName="dnsmasq-dns" containerID="cri-o://da71d54e51d93b687f2d84195190423a48acc1e3c64d191b6bead08f5304d0ec" gracePeriod=10 Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.019796 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.021040 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.025171 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.025544 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cxzl5" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.025693 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.033524 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a7ead03-cd14-44b3-967b-9daaf4070687-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.033717 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7ead03-cd14-44b3-967b-9daaf4070687-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.033781 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a7ead03-cd14-44b3-967b-9daaf4070687-openstack-config\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.033806 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shv6\" (UniqueName: 
\"kubernetes.io/projected/1a7ead03-cd14-44b3-967b-9daaf4070687-kube-api-access-2shv6\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.044010 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.154298 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a7ead03-cd14-44b3-967b-9daaf4070687-openstack-config\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.154353 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2shv6\" (UniqueName: \"kubernetes.io/projected/1a7ead03-cd14-44b3-967b-9daaf4070687-kube-api-access-2shv6\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.155340 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a7ead03-cd14-44b3-967b-9daaf4070687-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.155508 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7ead03-cd14-44b3-967b-9daaf4070687-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.158696 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/1a7ead03-cd14-44b3-967b-9daaf4070687-openstack-config\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.164294 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7ead03-cd14-44b3-967b-9daaf4070687-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.169908 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a7ead03-cd14-44b3-967b-9daaf4070687-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.185541 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2shv6\" (UniqueName: \"kubernetes.io/projected/1a7ead03-cd14-44b3-967b-9daaf4070687-kube-api-access-2shv6\") pod \"openstackclient\" (UID: \"1a7ead03-cd14-44b3-967b-9daaf4070687\") " pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.298566 4940 generic.go:334] "Generic (PLEG): container finished" podID="b39bf698-8f6d-4434-a926-239c936bbdca" containerID="da71d54e51d93b687f2d84195190423a48acc1e3c64d191b6bead08f5304d0ec" exitCode=0 Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.298875 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" event={"ID":"b39bf698-8f6d-4434-a926-239c936bbdca","Type":"ContainerDied","Data":"da71d54e51d93b687f2d84195190423a48acc1e3c64d191b6bead08f5304d0ec"} Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.300299 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerID="11797bef9538eb6edb5e371221d846a781a4b92d10e5219e36aba40f8881e8b1" exitCode=0 Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.300313 4940 generic.go:334] "Generic (PLEG): container finished" podID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerID="bcb31e77411c89974d1b48c0334789218d6edc4aeea297dd9885286842c5824d" exitCode=0 Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.300342 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf","Type":"ContainerDied","Data":"11797bef9538eb6edb5e371221d846a781a4b92d10e5219e36aba40f8881e8b1"} Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.300357 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf","Type":"ContainerDied","Data":"bcb31e77411c89974d1b48c0334789218d6edc4aeea297dd9885286842c5824d"} Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.301964 4940 generic.go:334] "Generic (PLEG): container finished" podID="f433de8f-71fb-4f02-a223-871cc2959145" containerID="74f58278e9eaf1ba3aaa6c4c89dd754902de0e73978ad252afed9398bcb8f2e6" exitCode=143 Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.302950 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6749c74d-ng8p9" event={"ID":"f433de8f-71fb-4f02-a223-871cc2959145","Type":"ContainerDied","Data":"74f58278e9eaf1ba3aaa6c4c89dd754902de0e73978ad252afed9398bcb8f2e6"} Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.346151 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.379157 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.440143 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569793 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-cinder\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569851 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-lib-cinder\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569887 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-nvme\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569908 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-dev\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569937 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-sb\") pod \"b39bf698-8f6d-4434-a926-239c936bbdca\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " Feb 
23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569951 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-run\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569974 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-scripts\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.569999 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhcq\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-kube-api-access-5zhcq\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570048 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-ceph\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570068 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data-custom\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570095 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data\") pod 
\"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570120 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsstc\" (UniqueName: \"kubernetes.io/projected/b39bf698-8f6d-4434-a926-239c936bbdca-kube-api-access-tsstc\") pod \"b39bf698-8f6d-4434-a926-239c936bbdca\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570158 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-iscsi\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570183 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-nb\") pod \"b39bf698-8f6d-4434-a926-239c936bbdca\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570202 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-swift-storage-0\") pod \"b39bf698-8f6d-4434-a926-239c936bbdca\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570228 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-combined-ca-bundle\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570262 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-svc\") pod \"b39bf698-8f6d-4434-a926-239c936bbdca\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570284 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-sys\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570309 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-config\") pod \"b39bf698-8f6d-4434-a926-239c936bbdca\" (UID: \"b39bf698-8f6d-4434-a926-239c936bbdca\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570349 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-brick\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570428 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-lib-modules\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.570458 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-machine-id\") pod \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\" (UID: \"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf\") " Feb 23 09:10:13 crc 
kubenswrapper[4940]: I0223 09:10:13.570918 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.571679 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.573752 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-sys" (OuterVolumeSpecName: "sys") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.574721 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.574785 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.574807 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.574828 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.574848 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-dev" (OuterVolumeSpecName: "dev") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.574979 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39bf698-8f6d-4434-a926-239c936bbdca-kube-api-access-tsstc" (OuterVolumeSpecName: "kube-api-access-tsstc") pod "b39bf698-8f6d-4434-a926-239c936bbdca" (UID: "b39bf698-8f6d-4434-a926-239c936bbdca"). InnerVolumeSpecName "kube-api-access-tsstc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.575034 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.578742 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-run" (OuterVolumeSpecName: "run") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.628194 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-scripts" (OuterVolumeSpecName: "scripts") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.629946 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.633419 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-kube-api-access-5zhcq" (OuterVolumeSpecName: "kube-api-access-5zhcq") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "kube-api-access-5zhcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.650811 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-ceph" (OuterVolumeSpecName: "ceph") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673125 4940 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-lib-modules\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673157 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673167 4940 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673176 4940 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-lib-cinder\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673185 4940 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-nvme\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673192 4940 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-dev\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673201 4940 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-run\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673208 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673216 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zhcq\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-kube-api-access-5zhcq\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673224 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-ceph\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673232 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673241 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsstc\" (UniqueName: \"kubernetes.io/projected/b39bf698-8f6d-4434-a926-239c936bbdca-kube-api-access-tsstc\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673248 4940 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-etc-iscsi\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673256 4940 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-sys\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673265 4940 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-var-locks-brick\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.673312 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.717455 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b39bf698-8f6d-4434-a926-239c936bbdca" (UID: "b39bf698-8f6d-4434-a926-239c936bbdca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.741186 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-config" (OuterVolumeSpecName: "config") pod "b39bf698-8f6d-4434-a926-239c936bbdca" (UID: "b39bf698-8f6d-4434-a926-239c936bbdca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.787818 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.787851 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-config\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.830767 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.850316 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b39bf698-8f6d-4434-a926-239c936bbdca" (UID: "b39bf698-8f6d-4434-a926-239c936bbdca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.895890 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b39bf698-8f6d-4434-a926-239c936bbdca" (UID: "b39bf698-8f6d-4434-a926-239c936bbdca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.897144 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.897168 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.897179 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.902765 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.921192 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b39bf698-8f6d-4434-a926-239c936bbdca" (UID: "b39bf698-8f6d-4434-a926-239c936bbdca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 09:10:13 crc kubenswrapper[4940]: I0223 09:10:13.993298 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data" (OuterVolumeSpecName: "config-data") pod "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" (UID: "3f62a0b8-7bf4-40cb-a561-b2afd12eabdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.001446 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b39bf698-8f6d-4434-a926-239c936bbdca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.001652 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.063143 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.315904 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a7ead03-cd14-44b3-967b-9daaf4070687","Type":"ContainerStarted","Data":"b50bc4740ab1d0249183724aaf41c55e5bb163866ebb034d669968b3d736a0b7"}
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.327021 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.327014 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ghmbn" event={"ID":"b39bf698-8f6d-4434-a926-239c936bbdca","Type":"ContainerDied","Data":"afca7315b672da42c17cfa6c3463952df3efc5ea90dce4c6d570d8228ab1518e"}
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.327731 4940 scope.go:117] "RemoveContainer" containerID="da71d54e51d93b687f2d84195190423a48acc1e3c64d191b6bead08f5304d0ec"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.332812 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.332852 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"3f62a0b8-7bf4-40cb-a561-b2afd12eabdf","Type":"ContainerDied","Data":"87d3ffcbe822fd2a9e75ee4a029931c54029d633fce9cb569c64e5cd6b00566b"}
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.410785 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ghmbn"]
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.431100 4940 scope.go:117] "RemoveContainer" containerID="596291a8e998dffd49a123d45c88e3c1ffab983d33f61afe467c6d13867dbe9b"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.447471 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ghmbn"]
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.461857 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.475269 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486052 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 23 09:10:14 crc kubenswrapper[4940]: E0223 09:10:14.486492 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="cinder-volume"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486505 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="cinder-volume"
Feb 23 09:10:14 crc kubenswrapper[4940]: E0223 09:10:14.486522 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" containerName="dnsmasq-dns"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486528 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" containerName="dnsmasq-dns"
Feb 23 09:10:14 crc kubenswrapper[4940]: E0223 09:10:14.486546 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" containerName="init"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486553 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" containerName="init"
Feb 23 09:10:14 crc kubenswrapper[4940]: E0223 09:10:14.486581 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="probe"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486589 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="probe"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486831 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="probe"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486852 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" containerName="cinder-volume"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.486878 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" containerName="dnsmasq-dns"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.488220 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.492867 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.493208 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.508812 4940 scope.go:117] "RemoveContainer" containerID="11797bef9538eb6edb5e371221d846a781a4b92d10e5219e36aba40f8881e8b1"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.635066 4940 scope.go:117] "RemoveContainer" containerID="bcb31e77411c89974d1b48c0334789218d6edc4aeea297dd9885286842c5824d"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637545 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-run\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637577 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637634 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637656 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637676 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637694 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcdt\" (UniqueName: \"kubernetes.io/projected/f446400f-c44a-49c0-891b-83b475c43e39-kube-api-access-stcdt\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637749 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637765 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f446400f-c44a-49c0-891b-83b475c43e39-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637790 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637812 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637839 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637887 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637909 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637924 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637944 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.637967 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740289 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740688 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740784 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740826 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740849 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740883 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740921 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740971 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-run\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.740996 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741073 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741104 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741139 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741168 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stcdt\" (UniqueName: \"kubernetes.io/projected/f446400f-c44a-49c0-891b-83b475c43e39-kube-api-access-stcdt\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741234 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741339 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741343 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741375 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-run\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741566 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741588 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741254 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741718 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.741749 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.742083 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f446400f-c44a-49c0-891b-83b475c43e39-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.742125 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.742126 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.742202 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f446400f-c44a-49c0-891b-83b475c43e39-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.745007 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.748954 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.749225 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.752046 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f446400f-c44a-49c0-891b-83b475c43e39-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.755834 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f446400f-c44a-49c0-891b-83b475c43e39-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.758496 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stcdt\" (UniqueName: \"kubernetes.io/projected/f446400f-c44a-49c0-891b-83b475c43e39-kube-api-access-stcdt\") pod \"cinder-volume-volume1-0\" (UID: \"f446400f-c44a-49c0-891b-83b475c43e39\") " pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:14 crc kubenswrapper[4940]: I0223 09:10:14.887959 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Feb 23 09:10:15 crc kubenswrapper[4940]: I0223 09:10:15.372631 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f62a0b8-7bf4-40cb-a561-b2afd12eabdf" path="/var/lib/kubelet/pods/3f62a0b8-7bf4-40cb-a561-b2afd12eabdf/volumes"
Feb 23 09:10:15 crc kubenswrapper[4940]: I0223 09:10:15.373672 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39bf698-8f6d-4434-a926-239c936bbdca" path="/var/lib/kubelet/pods/b39bf698-8f6d-4434-a926-239c936bbdca/volumes"
Feb 23 09:10:15 crc kubenswrapper[4940]: I0223 09:10:15.609568 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 23 09:10:16 crc kubenswrapper[4940]: I0223 09:10:16.376421 4940 generic.go:334] "Generic (PLEG): container finished" podID="f433de8f-71fb-4f02-a223-871cc2959145" containerID="7a06c9772d7b7b2824187fdbe6566e63cba4d50fa670b25290e09e202ad9a4db" exitCode=0
Feb 23 09:10:16 crc kubenswrapper[4940]: I0223 09:10:16.376486 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6749c74d-ng8p9" event={"ID":"f433de8f-71fb-4f02-a223-871cc2959145","Type":"ContainerDied","Data":"7a06c9772d7b7b2824187fdbe6566e63cba4d50fa670b25290e09e202ad9a4db"}
Feb 23 09:10:16 crc kubenswrapper[4940]: I0223 09:10:16.729121 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.729099 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.729763 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-central-agent" containerID="cri-o://f8c9e1f5b64f331ab938034be459c623bf79569ea26e4ea027e3ccb0e17c22fa" gracePeriod=30
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.729870 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="proxy-httpd" containerID="cri-o://ab8a86f359525935b23dd226737644653c8c81f21f8a71e78f7cb5d4e05d2df2" gracePeriod=30
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.729919 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="sg-core" containerID="cri-o://2a394c3fd47099b97de59751ef4eb8ae13d9edc47d15bd550d03fa9dd04ca446" gracePeriod=30
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.729933 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-notification-agent" containerID="cri-o://4795d11a8831ccafb873cf48b024c7a56ba6297d9af727d45b808e6052ded532" gracePeriod=30
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.742311 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.175:3000/\": EOF"
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.967796 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sjg4x"]
Feb 23 09:10:18 crc kubenswrapper[4940]: I0223 09:10:18.969698 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sjg4x"
Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.005049 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sjg4x"]
Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.044792 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea39198-ffee-4b2e-9561-71a16fab5149-operator-scripts\") pod \"nova-api-db-create-sjg4x\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " pod="openstack/nova-api-db-create-sjg4x"
Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.044864 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gvn\" (UniqueName: \"kubernetes.io/projected/2ea39198-ffee-4b2e-9561-71a16fab5149-kube-api-access-h4gvn\") pod \"nova-api-db-create-sjg4x\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " pod="openstack/nova-api-db-create-sjg4x"
Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.094847 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-j6xpr"]
Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.096331 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.121536 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-j6xpr"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.146722 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gvn\" (UniqueName: \"kubernetes.io/projected/2ea39198-ffee-4b2e-9561-71a16fab5149-kube-api-access-h4gvn\") pod \"nova-api-db-create-sjg4x\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.146817 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9332095-7a85-4e0d-8a06-da6462a9397b-operator-scripts\") pod \"nova-cell0-db-create-j6xpr\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.146889 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmc4\" (UniqueName: \"kubernetes.io/projected/e9332095-7a85-4e0d-8a06-da6462a9397b-kube-api-access-zsmc4\") pod \"nova-cell0-db-create-j6xpr\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.146918 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea39198-ffee-4b2e-9561-71a16fab5149-operator-scripts\") pod \"nova-api-db-create-sjg4x\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.147683 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/2ea39198-ffee-4b2e-9561-71a16fab5149-operator-scripts\") pod \"nova-api-db-create-sjg4x\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.191933 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gvn\" (UniqueName: \"kubernetes.io/projected/2ea39198-ffee-4b2e-9561-71a16fab5149-kube-api-access-h4gvn\") pod \"nova-api-db-create-sjg4x\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.210695 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-53e3-account-create-update-dlb86"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.211994 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.216580 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.248328 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9332095-7a85-4e0d-8a06-da6462a9397b-operator-scripts\") pod \"nova-cell0-db-create-j6xpr\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.248456 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsmc4\" (UniqueName: \"kubernetes.io/projected/e9332095-7a85-4e0d-8a06-da6462a9397b-kube-api-access-zsmc4\") pod \"nova-cell0-db-create-j6xpr\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.258877 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9332095-7a85-4e0d-8a06-da6462a9397b-operator-scripts\") pod \"nova-cell0-db-create-j6xpr\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.265682 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-53e3-account-create-update-dlb86"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.295867 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsmc4\" (UniqueName: \"kubernetes.io/projected/e9332095-7a85-4e0d-8a06-da6462a9397b-kube-api-access-zsmc4\") pod \"nova-cell0-db-create-j6xpr\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.305398 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8slbq"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.306557 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.313022 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8slbq"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.313256 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.352820 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502149a1-62b1-4e45-831b-51d1d10d4265-operator-scripts\") pod \"nova-api-53e3-account-create-update-dlb86\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.352882 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792pk\" (UniqueName: \"kubernetes.io/projected/502149a1-62b1-4e45-831b-51d1d10d4265-kube-api-access-792pk\") pod \"nova-api-53e3-account-create-update-dlb86\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.391834 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-50ec-account-create-update-mqlhz"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.393310 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.395385 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.420482 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-50ec-account-create-update-mqlhz"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.443541 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.459180 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-operator-scripts\") pod \"nova-cell0-50ec-account-create-update-mqlhz\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.460415 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5684e490-a8e8-435c-8d93-6b510ca1f90f-operator-scripts\") pod \"nova-cell1-db-create-8slbq\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.460485 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8bf\" (UniqueName: \"kubernetes.io/projected/5684e490-a8e8-435c-8d93-6b510ca1f90f-kube-api-access-nz8bf\") pod \"nova-cell1-db-create-8slbq\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.460575 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502149a1-62b1-4e45-831b-51d1d10d4265-operator-scripts\") pod \"nova-api-53e3-account-create-update-dlb86\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.460663 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-792pk\" (UniqueName: 
\"kubernetes.io/projected/502149a1-62b1-4e45-831b-51d1d10d4265-kube-api-access-792pk\") pod \"nova-api-53e3-account-create-update-dlb86\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.460797 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpx8k\" (UniqueName: \"kubernetes.io/projected/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-kube-api-access-zpx8k\") pod \"nova-cell0-50ec-account-create-update-mqlhz\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.464198 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502149a1-62b1-4e45-831b-51d1d10d4265-operator-scripts\") pod \"nova-api-53e3-account-create-update-dlb86\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.478856 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-792pk\" (UniqueName: \"kubernetes.io/projected/502149a1-62b1-4e45-831b-51d1d10d4265-kube-api-access-792pk\") pod \"nova-api-53e3-account-create-update-dlb86\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.487881 4940 generic.go:334] "Generic (PLEG): container finished" podID="5278c504-4391-438f-a6c9-39071eade5ae" containerID="ab8a86f359525935b23dd226737644653c8c81f21f8a71e78f7cb5d4e05d2df2" exitCode=0 Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.487916 4940 generic.go:334] "Generic (PLEG): container finished" podID="5278c504-4391-438f-a6c9-39071eade5ae" 
containerID="2a394c3fd47099b97de59751ef4eb8ae13d9edc47d15bd550d03fa9dd04ca446" exitCode=2 Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.487938 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerDied","Data":"ab8a86f359525935b23dd226737644653c8c81f21f8a71e78f7cb5d4e05d2df2"} Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.487966 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerDied","Data":"2a394c3fd47099b97de59751ef4eb8ae13d9edc47d15bd550d03fa9dd04ca446"} Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.562568 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-operator-scripts\") pod \"nova-cell0-50ec-account-create-update-mqlhz\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.562698 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5684e490-a8e8-435c-8d93-6b510ca1f90f-operator-scripts\") pod \"nova-cell1-db-create-8slbq\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.562919 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz8bf\" (UniqueName: \"kubernetes.io/projected/5684e490-a8e8-435c-8d93-6b510ca1f90f-kube-api-access-nz8bf\") pod \"nova-cell1-db-create-8slbq\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.563000 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zpx8k\" (UniqueName: \"kubernetes.io/projected/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-kube-api-access-zpx8k\") pod \"nova-cell0-50ec-account-create-update-mqlhz\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.564152 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-operator-scripts\") pod \"nova-cell0-50ec-account-create-update-mqlhz\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.565098 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5684e490-a8e8-435c-8d93-6b510ca1f90f-operator-scripts\") pod \"nova-cell1-db-create-8slbq\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.574080 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c0be-account-create-update-2b67b"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.576447 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.591235 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.591292 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.591646 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.593419 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.598274 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz8bf\" (UniqueName: \"kubernetes.io/projected/5684e490-a8e8-435c-8d93-6b510ca1f90f-kube-api-access-nz8bf\") pod \"nova-cell1-db-create-8slbq\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.612412 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpx8k\" (UniqueName: \"kubernetes.io/projected/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-kube-api-access-zpx8k\") pod \"nova-cell0-50ec-account-create-update-mqlhz\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.617015 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c0be-account-create-update-2b67b"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.662313 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.665403 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsj9h\" (UniqueName: \"kubernetes.io/projected/871fc6a4-b1d8-4676-977e-10c7bd9bf609-kube-api-access-wsj9h\") pod \"nova-cell1-c0be-account-create-update-2b67b\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.665500 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/871fc6a4-b1d8-4676-977e-10c7bd9bf609-operator-scripts\") pod \"nova-cell1-c0be-account-create-update-2b67b\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.721233 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.768108 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/871fc6a4-b1d8-4676-977e-10c7bd9bf609-operator-scripts\") pod \"nova-cell1-c0be-account-create-update-2b67b\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.768386 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsj9h\" (UniqueName: \"kubernetes.io/projected/871fc6a4-b1d8-4676-977e-10c7bd9bf609-kube-api-access-wsj9h\") pod \"nova-cell1-c0be-account-create-update-2b67b\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.769574 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/871fc6a4-b1d8-4676-977e-10c7bd9bf609-operator-scripts\") pod \"nova-cell1-c0be-account-create-update-2b67b\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.770746 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6c475756fc-pxxbv"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.772453 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.787137 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.787298 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.787537 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.789537 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c475756fc-pxxbv"] Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.791309 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsj9h\" (UniqueName: \"kubernetes.io/projected/871fc6a4-b1d8-4676-977e-10c7bd9bf609-kube-api-access-wsj9h\") pod \"nova-cell1-c0be-account-create-update-2b67b\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869570 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-combined-ca-bundle\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869646 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418704a3-dc2d-440f-8beb-2c00795cf4d4-etc-swift\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc 
kubenswrapper[4940]: I0223 09:10:19.869697 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-public-tls-certs\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869788 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418704a3-dc2d-440f-8beb-2c00795cf4d4-run-httpd\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869825 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418704a3-dc2d-440f-8beb-2c00795cf4d4-log-httpd\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869847 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-internal-tls-certs\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869908 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsp6q\" (UniqueName: \"kubernetes.io/projected/418704a3-dc2d-440f-8beb-2c00795cf4d4-kube-api-access-fsp6q\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " 
pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.869934 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-config-data\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.970924 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsp6q\" (UniqueName: \"kubernetes.io/projected/418704a3-dc2d-440f-8beb-2c00795cf4d4-kube-api-access-fsp6q\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.970973 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-config-data\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.971021 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-combined-ca-bundle\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.971039 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418704a3-dc2d-440f-8beb-2c00795cf4d4-etc-swift\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" 
Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.971074 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-public-tls-certs\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.971145 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418704a3-dc2d-440f-8beb-2c00795cf4d4-run-httpd\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.971171 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418704a3-dc2d-440f-8beb-2c00795cf4d4-log-httpd\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.971189 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-internal-tls-certs\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.973054 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418704a3-dc2d-440f-8beb-2c00795cf4d4-run-httpd\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.973369 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418704a3-dc2d-440f-8beb-2c00795cf4d4-log-httpd\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.978818 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418704a3-dc2d-440f-8beb-2c00795cf4d4-etc-swift\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.980695 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-config-data\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.986242 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-combined-ca-bundle\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:19 crc kubenswrapper[4940]: I0223 09:10:19.987301 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-public-tls-certs\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.002475 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/418704a3-dc2d-440f-8beb-2c00795cf4d4-internal-tls-certs\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.011398 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.013039 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsp6q\" (UniqueName: \"kubernetes.io/projected/418704a3-dc2d-440f-8beb-2c00795cf4d4-kube-api-access-fsp6q\") pod \"swift-proxy-6c475756fc-pxxbv\" (UID: \"418704a3-dc2d-440f-8beb-2c00795cf4d4\") " pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.115935 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.502083 4940 generic.go:334] "Generic (PLEG): container finished" podID="5278c504-4391-438f-a6c9-39071eade5ae" containerID="4795d11a8831ccafb873cf48b024c7a56ba6297d9af727d45b808e6052ded532" exitCode=0 Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.502379 4940 generic.go:334] "Generic (PLEG): container finished" podID="5278c504-4391-438f-a6c9-39071eade5ae" containerID="f8c9e1f5b64f331ab938034be459c623bf79569ea26e4ea027e3ccb0e17c22fa" exitCode=0 Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.502139 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerDied","Data":"4795d11a8831ccafb873cf48b024c7a56ba6297d9af727d45b808e6052ded532"} Feb 23 09:10:20 crc kubenswrapper[4940]: I0223 09:10:20.502420 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerDied","Data":"f8c9e1f5b64f331ab938034be459c623bf79569ea26e4ea027e3ccb0e17c22fa"} Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.446794 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.536428 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f446400f-c44a-49c0-891b-83b475c43e39","Type":"ContainerStarted","Data":"f13ba925855d3e745d135c4403566ea9ab3bef4dcf333c1ae8693d4ea62ba8b3"} Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.544715 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c6749c74d-ng8p9" event={"ID":"f433de8f-71fb-4f02-a223-871cc2959145","Type":"ContainerDied","Data":"49eaa9426bb097e691ef23490366aa36cbbb95591a71cc92ebcb85c243fba664"} Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.544803 4940 scope.go:117] "RemoveContainer" containerID="7a06c9772d7b7b2824187fdbe6566e63cba4d50fa670b25290e09e202ad9a4db" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.544923 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c6749c74d-ng8p9" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.618748 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7xts\" (UniqueName: \"kubernetes.io/projected/f433de8f-71fb-4f02-a223-871cc2959145-kube-api-access-s7xts\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.618839 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-internal-tls-certs\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.618867 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-combined-ca-bundle\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.618909 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-config-data\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.618954 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f433de8f-71fb-4f02-a223-871cc2959145-logs\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.618974 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-public-tls-certs\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.619102 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-scripts\") pod \"f433de8f-71fb-4f02-a223-871cc2959145\" (UID: \"f433de8f-71fb-4f02-a223-871cc2959145\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.620112 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f433de8f-71fb-4f02-a223-871cc2959145-logs" (OuterVolumeSpecName: "logs") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.624954 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f433de8f-71fb-4f02-a223-871cc2959145-kube-api-access-s7xts" (OuterVolumeSpecName: "kube-api-access-s7xts") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "kube-api-access-s7xts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.661592 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-scripts" (OuterVolumeSpecName: "scripts") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.699066 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-j6xpr"] Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.716557 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.721029 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.721243 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7xts\" (UniqueName: \"kubernetes.io/projected/f433de8f-71fb-4f02-a223-871cc2959145-kube-api-access-s7xts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.721364 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f433de8f-71fb-4f02-a223-871cc2959145-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.729832 4940 scope.go:117] "RemoveContainer" containerID="74f58278e9eaf1ba3aaa6c4c89dd754902de0e73978ad252afed9398bcb8f2e6" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.753890 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.759537 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-config-data" (OuterVolumeSpecName: "config-data") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.812822 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.814757 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f433de8f-71fb-4f02-a223-871cc2959145" (UID: "f433de8f-71fb-4f02-a223-871cc2959145"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822466 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-combined-ca-bundle\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822518 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-log-httpd\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822712 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-run-httpd\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822770 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-scripts\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822809 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-sg-core-conf-yaml\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822902 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-config-data\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.822966 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfg6b\" (UniqueName: \"kubernetes.io/projected/5278c504-4391-438f-a6c9-39071eade5ae-kube-api-access-dfg6b\") pod \"5278c504-4391-438f-a6c9-39071eade5ae\" (UID: \"5278c504-4391-438f-a6c9-39071eade5ae\") " Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.823407 4940 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.823426 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.823436 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.823444 4940 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f433de8f-71fb-4f02-a223-871cc2959145-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.827594 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5278c504-4391-438f-a6c9-39071eade5ae-kube-api-access-dfg6b" (OuterVolumeSpecName: "kube-api-access-dfg6b") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). 
InnerVolumeSpecName "kube-api-access-dfg6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.830460 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.834184 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-scripts" (OuterVolumeSpecName: "scripts") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.834302 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.871005 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.905483 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c6749c74d-ng8p9"] Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.915429 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6c6749c74d-ng8p9"] Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.926394 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfg6b\" (UniqueName: \"kubernetes.io/projected/5278c504-4391-438f-a6c9-39071eade5ae-kube-api-access-dfg6b\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.926418 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.926428 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5278c504-4391-438f-a6c9-39071eade5ae-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.926437 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:21 crc kubenswrapper[4940]: I0223 09:10:21.926448 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.125596 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c0be-account-create-update-2b67b"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.160812 4940 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.174647 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-53e3-account-create-update-dlb86"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.186942 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-config-data" (OuterVolumeSpecName: "config-data") pod "5278c504-4391-438f-a6c9-39071eade5ae" (UID: "5278c504-4391-438f-a6c9-39071eade5ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.200600 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-50ec-account-create-update-mqlhz"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.234453 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sjg4x"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.237524 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.237551 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5278c504-4391-438f-a6c9-39071eade5ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:22 crc kubenswrapper[4940]: E0223 09:10:22.246546 4940 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf433de8f_71fb_4f02_a223_871cc2959145.slice/crio-49eaa9426bb097e691ef23490366aa36cbbb95591a71cc92ebcb85c243fba664\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf433de8f_71fb_4f02_a223_871cc2959145.slice\": RecentStats: unable to find data in memory cache]" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.264199 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8slbq"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.282603 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c475756fc-pxxbv"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.565064 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.565677 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5278c504-4391-438f-a6c9-39071eade5ae","Type":"ContainerDied","Data":"35f848f0051c146254ae7754c1b502844c27fff6495965f77f0626b1c2e358fa"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.565725 4940 scope.go:117] "RemoveContainer" containerID="ab8a86f359525935b23dd226737644653c8c81f21f8a71e78f7cb5d4e05d2df2" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.571857 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ec61a030-1b08-4c8f-8008-842f8c7decb0","Type":"ContainerStarted","Data":"d4e707da34679415a391142e287a18b6057007359a5330ef977a37142fb5e6fb"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.577172 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.577262 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.600189 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f446400f-c44a-49c0-891b-83b475c43e39","Type":"ContainerStarted","Data":"1e5cd50e2d9c57d7375b9f48826980a26d2d30775e3005d3e39d466089ec530c"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.600434 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f446400f-c44a-49c0-891b-83b475c43e39","Type":"ContainerStarted","Data":"7340aefb1d4aae8a2fc58ba9ca457341b40dd72b4395fc39687fc82a57b97556"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.653884 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c475756fc-pxxbv" event={"ID":"418704a3-dc2d-440f-8beb-2c00795cf4d4","Type":"ContainerStarted","Data":"474dc8f30553d9f4b6ea3f4171c8c1a391133a182a1513d31a75bdefaaa732a9"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.678233 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.678274 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" event={"ID":"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b","Type":"ContainerStarted","Data":"99d71e1b1de26f75e0a81b03f42f8dd622adaa846783e71d6f24ebbb01e4902a"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.695104 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8slbq" 
event={"ID":"5684e490-a8e8-435c-8d93-6b510ca1f90f","Type":"ContainerStarted","Data":"6aa285751dde6b90774bc87334ea721b5435f8c242665412dd868df8ebb9f68f"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.698004 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sjg4x" event={"ID":"2ea39198-ffee-4b2e-9561-71a16fab5149","Type":"ContainerStarted","Data":"6dc432283c177a2e6a312a285330269f56ffd532b26ff10648441f56f8a9da15"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.701334 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.704063 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c0be-account-create-update-2b67b" event={"ID":"871fc6a4-b1d8-4676-977e-10c7bd9bf609","Type":"ContainerStarted","Data":"112f50977488ee483eec62315bf24aaf893ea40e13072eb8f51dda773a662d71"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.708980 4940 generic.go:334] "Generic (PLEG): container finished" podID="e9332095-7a85-4e0d-8a06-da6462a9397b" containerID="e21b7573134555a09738502f85d5e3873c19ebc7d08f4d9759e38cf1a8aeb82e" exitCode=0 Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.709852 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j6xpr" event={"ID":"e9332095-7a85-4e0d-8a06-da6462a9397b","Type":"ContainerDied","Data":"e21b7573134555a09738502f85d5e3873c19ebc7d08f4d9759e38cf1a8aeb82e"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.709885 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j6xpr" event={"ID":"e9332095-7a85-4e0d-8a06-da6462a9397b","Type":"ContainerStarted","Data":"4effed95d3350e11636e887768bcee50a837c33cd47befb06cf15f9468078331"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.713434 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:22 crc 
kubenswrapper[4940]: E0223 09:10:22.713857 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-central-agent" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.713870 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-central-agent" Feb 23 09:10:22 crc kubenswrapper[4940]: E0223 09:10:22.713882 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-api" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.713904 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-api" Feb 23 09:10:22 crc kubenswrapper[4940]: E0223 09:10:22.713953 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="proxy-httpd" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.713960 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="proxy-httpd" Feb 23 09:10:22 crc kubenswrapper[4940]: E0223 09:10:22.713972 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="sg-core" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.713978 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="sg-core" Feb 23 09:10:22 crc kubenswrapper[4940]: E0223 09:10:22.713984 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-log" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.713990 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-log" Feb 23 09:10:22 crc kubenswrapper[4940]: E0223 
09:10:22.714004 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-notification-agent" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714010 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-notification-agent" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714572 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="proxy-httpd" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714597 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-log" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714661 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-central-agent" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714672 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="sg-core" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714685 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5278c504-4391-438f-a6c9-39071eade5ae" containerName="ceilometer-notification-agent" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.714696 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f433de8f-71fb-4f02-a223-871cc2959145" containerName="placement-api" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.716834 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.721823 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.722025 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.737202 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.750892 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=8.750865964 podStartE2EDuration="8.750865964s" podCreationTimestamp="2026-02-23 09:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:22.695786463 +0000 UTC m=+1354.078992640" watchObservedRunningTime="2026-02-23 09:10:22.750865964 +0000 UTC m=+1354.134072131" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.762186 4940 scope.go:117] "RemoveContainer" containerID="2a394c3fd47099b97de59751ef4eb8ae13d9edc47d15bd550d03fa9dd04ca446" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.766426 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-53e3-account-create-update-dlb86" event={"ID":"502149a1-62b1-4e45-831b-51d1d10d4265","Type":"ContainerStarted","Data":"4bdcad8b93235ef34bd8dde0b9c30d6097c3b961dc6f80d0bb5cf3e9e0ed46f0"} Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.797431 4940 scope.go:117] "RemoveContainer" containerID="4795d11a8831ccafb873cf48b024c7a56ba6297d9af727d45b808e6052ded532" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.863017 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.863084 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.863262 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-log-httpd\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.863424 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-scripts\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.863668 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-config-data\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.863865 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-run-httpd\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " 
pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.864032 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9j7\" (UniqueName: \"kubernetes.io/projected/04e3d354-3545-45bc-be74-899fa8a434dd-kube-api-access-lf9j7\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.880088 4940 scope.go:117] "RemoveContainer" containerID="f8c9e1f5b64f331ab938034be459c623bf79569ea26e4ea027e3ccb0e17c22fa" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.968417 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-log-httpd\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.968515 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-scripts\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.968557 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-config-data\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.968662 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-run-httpd\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc 
kubenswrapper[4940]: I0223 09:10:22.968739 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf9j7\" (UniqueName: \"kubernetes.io/projected/04e3d354-3545-45bc-be74-899fa8a434dd-kube-api-access-lf9j7\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.968774 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.968798 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.971223 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-log-httpd\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.971930 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-run-httpd\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.977188 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.977985 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.980250 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-scripts\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:22 crc kubenswrapper[4940]: I0223 09:10:22.981112 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-config-data\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.001504 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf9j7\" (UniqueName: \"kubernetes.io/projected/04e3d354-3545-45bc-be74-899fa8a434dd-kube-api-access-lf9j7\") pod \"ceilometer-0\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " pod="openstack/ceilometer-0" Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.059078 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.386471 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5278c504-4391-438f-a6c9-39071eade5ae" path="/var/lib/kubelet/pods/5278c504-4391-438f-a6c9-39071eade5ae/volumes" Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.389317 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f433de8f-71fb-4f02-a223-871cc2959145" path="/var/lib/kubelet/pods/f433de8f-71fb-4f02-a223-871cc2959145/volumes" Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.780476 4940 generic.go:334] "Generic (PLEG): container finished" podID="bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" containerID="b31dced08ed0d4610a6fc8684712ee95a303e4f689bd23d43d8ed45b13ccae92" exitCode=0 Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.780586 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" event={"ID":"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b","Type":"ContainerDied","Data":"b31dced08ed0d4610a6fc8684712ee95a303e4f689bd23d43d8ed45b13ccae92"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.784495 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c475756fc-pxxbv" event={"ID":"418704a3-dc2d-440f-8beb-2c00795cf4d4","Type":"ContainerStarted","Data":"fabc2833a044f67c292a5809c22f427fe0720d71e4c045c6a6973f9aa8879099"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.796117 4940 generic.go:334] "Generic (PLEG): container finished" podID="502149a1-62b1-4e45-831b-51d1d10d4265" containerID="9531b6a359564e5acc36c1a844011c56b279a58cb4d140e8d4784d32d1a4405c" exitCode=0 Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.796184 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-53e3-account-create-update-dlb86" 
event={"ID":"502149a1-62b1-4e45-831b-51d1d10d4265","Type":"ContainerDied","Data":"9531b6a359564e5acc36c1a844011c56b279a58cb4d140e8d4784d32d1a4405c"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.800339 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ec61a030-1b08-4c8f-8008-842f8c7decb0","Type":"ContainerStarted","Data":"65138f81a7970bcde369c1e9f6ca1969251f7ad6a40f85aa2f816e5160390912"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.806637 4940 generic.go:334] "Generic (PLEG): container finished" podID="5684e490-a8e8-435c-8d93-6b510ca1f90f" containerID="4f4df3fa4522baf5a0e74fba1c40c93d23867333df4e484b7336fb7618419dc8" exitCode=0 Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.806698 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8slbq" event={"ID":"5684e490-a8e8-435c-8d93-6b510ca1f90f","Type":"ContainerDied","Data":"4f4df3fa4522baf5a0e74fba1c40c93d23867333df4e484b7336fb7618419dc8"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.817175 4940 generic.go:334] "Generic (PLEG): container finished" podID="2ea39198-ffee-4b2e-9561-71a16fab5149" containerID="5e9d238d69a51cb9a14dd2440ed7053be4d703055895ca45dcce44cdb54d3f79" exitCode=0 Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.817241 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sjg4x" event={"ID":"2ea39198-ffee-4b2e-9561-71a16fab5149","Type":"ContainerDied","Data":"5e9d238d69a51cb9a14dd2440ed7053be4d703055895ca45dcce44cdb54d3f79"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.818742 4940 generic.go:334] "Generic (PLEG): container finished" podID="871fc6a4-b1d8-4676-977e-10c7bd9bf609" containerID="4f47c5299d7075d882481ef592b61a33f78a5937be72d9b5e3d4726b2451cd6a" exitCode=0 Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.818933 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-c0be-account-create-update-2b67b" event={"ID":"871fc6a4-b1d8-4676-977e-10c7bd9bf609","Type":"ContainerDied","Data":"4f47c5299d7075d882481ef592b61a33f78a5937be72d9b5e3d4726b2451cd6a"} Feb 23 09:10:23 crc kubenswrapper[4940]: I0223 09:10:23.848749 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=5.029243741 podStartE2EDuration="22.848720883s" podCreationTimestamp="2026-02-23 09:10:01 +0000 UTC" firstStartedPulling="2026-02-23 09:10:03.142680132 +0000 UTC m=+1334.525886289" lastFinishedPulling="2026-02-23 09:10:20.962157274 +0000 UTC m=+1352.345363431" observedRunningTime="2026-02-23 09:10:23.83081538 +0000 UTC m=+1355.214021557" watchObservedRunningTime="2026-02-23 09:10:23.848720883 +0000 UTC m=+1355.231927040" Feb 23 09:10:24 crc kubenswrapper[4940]: I0223 09:10:24.888926 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 23 09:10:25 crc kubenswrapper[4940]: I0223 09:10:25.192694 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 23 09:10:25 crc kubenswrapper[4940]: I0223 09:10:25.261054 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:25 crc kubenswrapper[4940]: I0223 09:10:25.843104 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="manila-scheduler" containerID="cri-o://e72875b833bfe94af04f9db504ebf0b2eaf410cdc9ae1f3a46b2cfef4e24be0d" gracePeriod=30 Feb 23 09:10:25 crc kubenswrapper[4940]: I0223 09:10:25.843231 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="probe" 
containerID="cri-o://043f3c89b263f3dc8987df0790dbb76fe90597d156f7760b6973738f63334b58" gracePeriod=30 Feb 23 09:10:26 crc kubenswrapper[4940]: I0223 09:10:26.644477 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-654489f6f-92jdq" Feb 23 09:10:26 crc kubenswrapper[4940]: I0223 09:10:26.719585 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cc5d5d86-sr2r2"] Feb 23 09:10:26 crc kubenswrapper[4940]: I0223 09:10:26.720008 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cc5d5d86-sr2r2" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-httpd" containerID="cri-o://f8f0ef202b54c93d5f890ebcc3a445b472067725bf46cacfd6e9c5cfa9fd63ad" gracePeriod=30 Feb 23 09:10:26 crc kubenswrapper[4940]: I0223 09:10:26.719937 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cc5d5d86-sr2r2" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-api" containerID="cri-o://b34f129a279e5f9d6d4a796ffb2079cee98358e47fe89b7c10add880a8f7af7e" gracePeriod=30 Feb 23 09:10:26 crc kubenswrapper[4940]: I0223 09:10:26.898864 4940 generic.go:334] "Generic (PLEG): container finished" podID="eafa7497-50f0-456e-9764-826840da7372" containerID="043f3c89b263f3dc8987df0790dbb76fe90597d156f7760b6973738f63334b58" exitCode=0 Feb 23 09:10:26 crc kubenswrapper[4940]: I0223 09:10:26.898910 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eafa7497-50f0-456e-9764-826840da7372","Type":"ContainerDied","Data":"043f3c89b263f3dc8987df0790dbb76fe90597d156f7760b6973738f63334b58"} Feb 23 09:10:27 crc kubenswrapper[4940]: I0223 09:10:27.912660 4940 generic.go:334] "Generic (PLEG): container finished" podID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerID="f8f0ef202b54c93d5f890ebcc3a445b472067725bf46cacfd6e9c5cfa9fd63ad" exitCode=0 Feb 23 09:10:27 crc 
kubenswrapper[4940]: I0223 09:10:27.912724 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cc5d5d86-sr2r2" event={"ID":"a587b9bd-1362-449d-92a0-6b2f25b45735","Type":"ContainerDied","Data":"f8f0ef202b54c93d5f890ebcc3a445b472067725bf46cacfd6e9c5cfa9fd63ad"} Feb 23 09:10:27 crc kubenswrapper[4940]: I0223 09:10:27.916704 4940 generic.go:334] "Generic (PLEG): container finished" podID="eafa7497-50f0-456e-9764-826840da7372" containerID="e72875b833bfe94af04f9db504ebf0b2eaf410cdc9ae1f3a46b2cfef4e24be0d" exitCode=0 Feb 23 09:10:27 crc kubenswrapper[4940]: I0223 09:10:27.916848 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eafa7497-50f0-456e-9764-826840da7372","Type":"ContainerDied","Data":"e72875b833bfe94af04f9db504ebf0b2eaf410cdc9ae1f3a46b2cfef4e24be0d"} Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.136140 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.270668 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.336921 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.952437 4940 generic.go:334] "Generic (PLEG): container finished" podID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerID="b34f129a279e5f9d6d4a796ffb2079cee98358e47fe89b7c10add880a8f7af7e" exitCode=0 Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.952560 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cc5d5d86-sr2r2" event={"ID":"a587b9bd-1362-449d-92a0-6b2f25b45735","Type":"ContainerDied","Data":"b34f129a279e5f9d6d4a796ffb2079cee98358e47fe89b7c10add880a8f7af7e"} Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.956935 4940 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-53e3-account-create-update-dlb86" event={"ID":"502149a1-62b1-4e45-831b-51d1d10d4265","Type":"ContainerDied","Data":"4bdcad8b93235ef34bd8dde0b9c30d6097c3b961dc6f80d0bb5cf3e9e0ed46f0"} Feb 23 09:10:30 crc kubenswrapper[4940]: I0223 09:10:30.956982 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bdcad8b93235ef34bd8dde0b9c30d6097c3b961dc6f80d0bb5cf3e9e0ed46f0" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.113083 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.125154 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.125910 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.140629 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.146582 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.200149 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.296474 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz8bf\" (UniqueName: \"kubernetes.io/projected/5684e490-a8e8-435c-8d93-6b510ca1f90f-kube-api-access-nz8bf\") pod \"5684e490-a8e8-435c-8d93-6b510ca1f90f\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297014 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4gvn\" (UniqueName: \"kubernetes.io/projected/2ea39198-ffee-4b2e-9561-71a16fab5149-kube-api-access-h4gvn\") pod \"2ea39198-ffee-4b2e-9561-71a16fab5149\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297065 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsj9h\" (UniqueName: \"kubernetes.io/projected/871fc6a4-b1d8-4676-977e-10c7bd9bf609-kube-api-access-wsj9h\") pod \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297136 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502149a1-62b1-4e45-831b-51d1d10d4265-operator-scripts\") pod \"502149a1-62b1-4e45-831b-51d1d10d4265\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297170 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpx8k\" (UniqueName: \"kubernetes.io/projected/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-kube-api-access-zpx8k\") pod \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297201 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea39198-ffee-4b2e-9561-71a16fab5149-operator-scripts\") pod \"2ea39198-ffee-4b2e-9561-71a16fab5149\" (UID: \"2ea39198-ffee-4b2e-9561-71a16fab5149\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297226 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5684e490-a8e8-435c-8d93-6b510ca1f90f-operator-scripts\") pod \"5684e490-a8e8-435c-8d93-6b510ca1f90f\" (UID: \"5684e490-a8e8-435c-8d93-6b510ca1f90f\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297287 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-792pk\" (UniqueName: \"kubernetes.io/projected/502149a1-62b1-4e45-831b-51d1d10d4265-kube-api-access-792pk\") pod \"502149a1-62b1-4e45-831b-51d1d10d4265\" (UID: \"502149a1-62b1-4e45-831b-51d1d10d4265\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297326 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsmc4\" (UniqueName: \"kubernetes.io/projected/e9332095-7a85-4e0d-8a06-da6462a9397b-kube-api-access-zsmc4\") pod \"e9332095-7a85-4e0d-8a06-da6462a9397b\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297379 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-operator-scripts\") pod \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\" (UID: \"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297442 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9332095-7a85-4e0d-8a06-da6462a9397b-operator-scripts\") pod 
\"e9332095-7a85-4e0d-8a06-da6462a9397b\" (UID: \"e9332095-7a85-4e0d-8a06-da6462a9397b\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.297492 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/871fc6a4-b1d8-4676-977e-10c7bd9bf609-operator-scripts\") pod \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\" (UID: \"871fc6a4-b1d8-4676-977e-10c7bd9bf609\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.299286 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" (UID: "bb9a8d01-0012-4ea6-b812-7ef38a27fe7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.299305 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502149a1-62b1-4e45-831b-51d1d10d4265-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "502149a1-62b1-4e45-831b-51d1d10d4265" (UID: "502149a1-62b1-4e45-831b-51d1d10d4265"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.299597 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea39198-ffee-4b2e-9561-71a16fab5149-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ea39198-ffee-4b2e-9561-71a16fab5149" (UID: "2ea39198-ffee-4b2e-9561-71a16fab5149"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.299750 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9332095-7a85-4e0d-8a06-da6462a9397b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9332095-7a85-4e0d-8a06-da6462a9397b" (UID: "e9332095-7a85-4e0d-8a06-da6462a9397b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.299986 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5684e490-a8e8-435c-8d93-6b510ca1f90f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5684e490-a8e8-435c-8d93-6b510ca1f90f" (UID: "5684e490-a8e8-435c-8d93-6b510ca1f90f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.300172 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/871fc6a4-b1d8-4676-977e-10c7bd9bf609-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "871fc6a4-b1d8-4676-977e-10c7bd9bf609" (UID: "871fc6a4-b1d8-4676-977e-10c7bd9bf609"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.303340 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-kube-api-access-zpx8k" (OuterVolumeSpecName: "kube-api-access-zpx8k") pod "bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" (UID: "bb9a8d01-0012-4ea6-b812-7ef38a27fe7b"). InnerVolumeSpecName "kube-api-access-zpx8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.308254 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9332095-7a85-4e0d-8a06-da6462a9397b-kube-api-access-zsmc4" (OuterVolumeSpecName: "kube-api-access-zsmc4") pod "e9332095-7a85-4e0d-8a06-da6462a9397b" (UID: "e9332095-7a85-4e0d-8a06-da6462a9397b"). InnerVolumeSpecName "kube-api-access-zsmc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.308736 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502149a1-62b1-4e45-831b-51d1d10d4265-kube-api-access-792pk" (OuterVolumeSpecName: "kube-api-access-792pk") pod "502149a1-62b1-4e45-831b-51d1d10d4265" (UID: "502149a1-62b1-4e45-831b-51d1d10d4265"). InnerVolumeSpecName "kube-api-access-792pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.315580 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871fc6a4-b1d8-4676-977e-10c7bd9bf609-kube-api-access-wsj9h" (OuterVolumeSpecName: "kube-api-access-wsj9h") pod "871fc6a4-b1d8-4676-977e-10c7bd9bf609" (UID: "871fc6a4-b1d8-4676-977e-10c7bd9bf609"). InnerVolumeSpecName "kube-api-access-wsj9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.315773 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5684e490-a8e8-435c-8d93-6b510ca1f90f-kube-api-access-nz8bf" (OuterVolumeSpecName: "kube-api-access-nz8bf") pod "5684e490-a8e8-435c-8d93-6b510ca1f90f" (UID: "5684e490-a8e8-435c-8d93-6b510ca1f90f"). InnerVolumeSpecName "kube-api-access-nz8bf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.315850 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea39198-ffee-4b2e-9561-71a16fab5149-kube-api-access-h4gvn" (OuterVolumeSpecName: "kube-api-access-h4gvn") pod "2ea39198-ffee-4b2e-9561-71a16fab5149" (UID: "2ea39198-ffee-4b2e-9561-71a16fab5149"). InnerVolumeSpecName "kube-api-access-h4gvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401136 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz8bf\" (UniqueName: \"kubernetes.io/projected/5684e490-a8e8-435c-8d93-6b510ca1f90f-kube-api-access-nz8bf\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401176 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4gvn\" (UniqueName: \"kubernetes.io/projected/2ea39198-ffee-4b2e-9561-71a16fab5149-kube-api-access-h4gvn\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401190 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsj9h\" (UniqueName: \"kubernetes.io/projected/871fc6a4-b1d8-4676-977e-10c7bd9bf609-kube-api-access-wsj9h\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401204 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502149a1-62b1-4e45-831b-51d1d10d4265-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401218 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpx8k\" (UniqueName: \"kubernetes.io/projected/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-kube-api-access-zpx8k\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401231 4940 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5684e490-a8e8-435c-8d93-6b510ca1f90f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401243 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ea39198-ffee-4b2e-9561-71a16fab5149-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401254 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-792pk\" (UniqueName: \"kubernetes.io/projected/502149a1-62b1-4e45-831b-51d1d10d4265-kube-api-access-792pk\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401300 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsmc4\" (UniqueName: \"kubernetes.io/projected/e9332095-7a85-4e0d-8a06-da6462a9397b-kube-api-access-zsmc4\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401315 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401329 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9332095-7a85-4e0d-8a06-da6462a9397b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.401343 4940 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/871fc6a4-b1d8-4676-977e-10c7bd9bf609-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.430246 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.430540 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.458415 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606049 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ddrr\" (UniqueName: \"kubernetes.io/projected/eafa7497-50f0-456e-9764-826840da7372-kube-api-access-9ddrr\") pod \"eafa7497-50f0-456e-9764-826840da7372\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606176 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data-custom\") pod \"eafa7497-50f0-456e-9764-826840da7372\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606207 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eafa7497-50f0-456e-9764-826840da7372-etc-machine-id\") pod \"eafa7497-50f0-456e-9764-826840da7372\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606290 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data\") pod \"eafa7497-50f0-456e-9764-826840da7372\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606346 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-scripts\") pod \"eafa7497-50f0-456e-9764-826840da7372\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606374 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eafa7497-50f0-456e-9764-826840da7372-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eafa7497-50f0-456e-9764-826840da7372" (UID: "eafa7497-50f0-456e-9764-826840da7372"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.606836 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-combined-ca-bundle\") pod \"eafa7497-50f0-456e-9764-826840da7372\" (UID: \"eafa7497-50f0-456e-9764-826840da7372\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.607730 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eafa7497-50f0-456e-9764-826840da7372-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.613880 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-scripts" (OuterVolumeSpecName: "scripts") pod "eafa7497-50f0-456e-9764-826840da7372" (UID: "eafa7497-50f0-456e-9764-826840da7372"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.613922 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eafa7497-50f0-456e-9764-826840da7372" (UID: "eafa7497-50f0-456e-9764-826840da7372"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.613958 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eafa7497-50f0-456e-9764-826840da7372-kube-api-access-9ddrr" (OuterVolumeSpecName: "kube-api-access-9ddrr") pod "eafa7497-50f0-456e-9764-826840da7372" (UID: "eafa7497-50f0-456e-9764-826840da7372"). InnerVolumeSpecName "kube-api-access-9ddrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.635895 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.709758 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.709787 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ddrr\" (UniqueName: \"kubernetes.io/projected/eafa7497-50f0-456e-9764-826840da7372-kube-api-access-9ddrr\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.709796 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.712705 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eafa7497-50f0-456e-9764-826840da7372" (UID: "eafa7497-50f0-456e-9764-826840da7372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.745656 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data" (OuterVolumeSpecName: "config-data") pod "eafa7497-50f0-456e-9764-826840da7372" (UID: "eafa7497-50f0-456e-9764-826840da7372"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.810584 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-config\") pod \"a587b9bd-1362-449d-92a0-6b2f25b45735\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.810738 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-combined-ca-bundle\") pod \"a587b9bd-1362-449d-92a0-6b2f25b45735\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.810785 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbjt8\" (UniqueName: \"kubernetes.io/projected/a587b9bd-1362-449d-92a0-6b2f25b45735-kube-api-access-lbjt8\") pod \"a587b9bd-1362-449d-92a0-6b2f25b45735\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.810909 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-ovndb-tls-certs\") pod \"a587b9bd-1362-449d-92a0-6b2f25b45735\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.811094 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-httpd-config\") pod \"a587b9bd-1362-449d-92a0-6b2f25b45735\" (UID: \"a587b9bd-1362-449d-92a0-6b2f25b45735\") " Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.811689 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.811711 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eafa7497-50f0-456e-9764-826840da7372-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.815722 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a587b9bd-1362-449d-92a0-6b2f25b45735" (UID: "a587b9bd-1362-449d-92a0-6b2f25b45735"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.815991 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a587b9bd-1362-449d-92a0-6b2f25b45735-kube-api-access-lbjt8" (OuterVolumeSpecName: "kube-api-access-lbjt8") pod "a587b9bd-1362-449d-92a0-6b2f25b45735" (UID: "a587b9bd-1362-449d-92a0-6b2f25b45735"). InnerVolumeSpecName "kube-api-access-lbjt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.874968 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.888940 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-config" (OuterVolumeSpecName: "config") pod "a587b9bd-1362-449d-92a0-6b2f25b45735" (UID: "a587b9bd-1362-449d-92a0-6b2f25b45735"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.913719 4940 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.913759 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.913772 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbjt8\" (UniqueName: \"kubernetes.io/projected/a587b9bd-1362-449d-92a0-6b2f25b45735-kube-api-access-lbjt8\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.922953 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a587b9bd-1362-449d-92a0-6b2f25b45735" (UID: "a587b9bd-1362-449d-92a0-6b2f25b45735"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.953816 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a587b9bd-1362-449d-92a0-6b2f25b45735" (UID: "a587b9bd-1362-449d-92a0-6b2f25b45735"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.958894 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.975892 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sjg4x" event={"ID":"2ea39198-ffee-4b2e-9561-71a16fab5149","Type":"ContainerDied","Data":"6dc432283c177a2e6a312a285330269f56ffd532b26ff10648441f56f8a9da15"} Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.975941 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc432283c177a2e6a312a285330269f56ffd532b26ff10648441f56f8a9da15" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.976015 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sjg4x" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.980728 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"eafa7497-50f0-456e-9764-826840da7372","Type":"ContainerDied","Data":"81f200d191f48539e09ffcc091106b30fe16021977bafd2143d0a222d6b63d18"} Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.980911 4940 scope.go:117] "RemoveContainer" containerID="043f3c89b263f3dc8987df0790dbb76fe90597d156f7760b6973738f63334b58" Feb 23 09:10:31 crc kubenswrapper[4940]: I0223 09:10:31.981138 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.003396 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a7ead03-cd14-44b3-967b-9daaf4070687","Type":"ContainerStarted","Data":"be5820922553ef7fbbaa31c42fdb52f4d174da406cd36bb78d8f57f0888797ce"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.017711 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.017752 4940 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a587b9bd-1362-449d-92a0-6b2f25b45735-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.017778 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" event={"ID":"bb9a8d01-0012-4ea6-b812-7ef38a27fe7b","Type":"ContainerDied","Data":"99d71e1b1de26f75e0a81b03f42f8dd622adaa846783e71d6f24ebbb01e4902a"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.017813 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d71e1b1de26f75e0a81b03f42f8dd622adaa846783e71d6f24ebbb01e4902a" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.017818 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-50ec-account-create-update-mqlhz" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.023533 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8slbq" event={"ID":"5684e490-a8e8-435c-8d93-6b510ca1f90f","Type":"ContainerDied","Data":"6aa285751dde6b90774bc87334ea721b5435f8c242665412dd868df8ebb9f68f"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.023861 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa285751dde6b90774bc87334ea721b5435f8c242665412dd868df8ebb9f68f" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.023567 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8slbq" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.027894 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.910579741 podStartE2EDuration="19.027871255s" podCreationTimestamp="2026-02-23 09:10:13 +0000 UTC" firstStartedPulling="2026-02-23 09:10:14.081787703 +0000 UTC m=+1345.464993860" lastFinishedPulling="2026-02-23 09:10:31.199079217 +0000 UTC m=+1362.582285374" observedRunningTime="2026-02-23 09:10:32.021765544 +0000 UTC m=+1363.404971721" watchObservedRunningTime="2026-02-23 09:10:32.027871255 +0000 UTC m=+1363.411077412" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.030297 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c0be-account-create-update-2b67b" event={"ID":"871fc6a4-b1d8-4676-977e-10c7bd9bf609","Type":"ContainerDied","Data":"112f50977488ee483eec62315bf24aaf893ea40e13072eb8f51dda773a662d71"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.030345 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="112f50977488ee483eec62315bf24aaf893ea40e13072eb8f51dda773a662d71" Feb 23 09:10:32 crc 
kubenswrapper[4940]: I0223 09:10:32.030409 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c0be-account-create-update-2b67b" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.040855 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerStarted","Data":"67c010c5bdd43256e9b3a0b106872969ca1d590ab066541de6d32ab7fd9dd06c"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.044914 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-j6xpr" event={"ID":"e9332095-7a85-4e0d-8a06-da6462a9397b","Type":"ContainerDied","Data":"4effed95d3350e11636e887768bcee50a837c33cd47befb06cf15f9468078331"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.044959 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4effed95d3350e11636e887768bcee50a837c33cd47befb06cf15f9468078331" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.045038 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-j6xpr" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.052243 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cc5d5d86-sr2r2" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.052580 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cc5d5d86-sr2r2" event={"ID":"a587b9bd-1362-449d-92a0-6b2f25b45735","Type":"ContainerDied","Data":"e02385b9d9223fb7aca0f8e3b245a1ec416f44cdc12ca79700aa821d7f0fff1b"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.074437 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-53e3-account-create-update-dlb86" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.076529 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c475756fc-pxxbv" event={"ID":"418704a3-dc2d-440f-8beb-2c00795cf4d4","Type":"ContainerStarted","Data":"6b5fabb733601dcdae870bd23ded81bb30b817aee460884ded3640ef34e694e1"} Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.076565 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.080706 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.089926 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.092038 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6c475756fc-pxxbv" podUID="418704a3-dc2d-440f-8beb-2c00795cf4d4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.092458 4940 scope.go:117] "RemoveContainer" containerID="e72875b833bfe94af04f9db504ebf0b2eaf410cdc9ae1f3a46b2cfef4e24be0d" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.112628 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.118638 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119155 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871fc6a4-b1d8-4676-977e-10c7bd9bf609" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119177 4940 
state_mem.go:107] "Deleted CPUSet assignment" podUID="871fc6a4-b1d8-4676-977e-10c7bd9bf609" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119195 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9332095-7a85-4e0d-8a06-da6462a9397b" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119203 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9332095-7a85-4e0d-8a06-da6462a9397b" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119215 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-httpd" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119222 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-httpd" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119241 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea39198-ffee-4b2e-9561-71a16fab5149" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119249 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea39198-ffee-4b2e-9561-71a16fab5149" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119263 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="probe" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119270 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="probe" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119287 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502149a1-62b1-4e45-831b-51d1d10d4265" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 
09:10:32.119295 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="502149a1-62b1-4e45-831b-51d1d10d4265" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119327 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="manila-scheduler" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119336 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="manila-scheduler" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119348 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5684e490-a8e8-435c-8d93-6b510ca1f90f" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119355 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5684e490-a8e8-435c-8d93-6b510ca1f90f" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119369 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119378 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: E0223 09:10:32.119390 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-api" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119398 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-api" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119678 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5684e490-a8e8-435c-8d93-6b510ca1f90f" containerName="mariadb-database-create" Feb 
23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119693 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="502149a1-62b1-4e45-831b-51d1d10d4265" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119707 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-api" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119723 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="probe" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119739 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9332095-7a85-4e0d-8a06-da6462a9397b" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119755 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119773 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea39198-ffee-4b2e-9561-71a16fab5149" containerName="mariadb-database-create" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119784 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="871fc6a4-b1d8-4676-977e-10c7bd9bf609" containerName="mariadb-account-create-update" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119802 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="eafa7497-50f0-456e-9764-826840da7372" containerName="manila-scheduler" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.119811 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" containerName="neutron-httpd" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.121057 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.128156 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.130514 4940 scope.go:117] "RemoveContainer" containerID="f8f0ef202b54c93d5f890ebcc3a445b472067725bf46cacfd6e9c5cfa9fd63ad" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.136029 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.142499 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6c475756fc-pxxbv" podStartSLOduration=13.142477205 podStartE2EDuration="13.142477205s" podCreationTimestamp="2026-02-23 09:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:32.136384964 +0000 UTC m=+1363.519591141" watchObservedRunningTime="2026-02-23 09:10:32.142477205 +0000 UTC m=+1363.525683372" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.143955 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6c475756fc-pxxbv" podUID="418704a3-dc2d-440f-8beb-2c00795cf4d4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.179868 4940 scope.go:117] "RemoveContainer" containerID="b34f129a279e5f9d6d4a796ffb2079cee98358e47fe89b7c10add880a8f7af7e" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.225012 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-scripts\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 
23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.225068 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.225112 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.225168 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6efb7037-6af6-4b85-b2fc-940a912cddf4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.225252 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-config-data\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.225310 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fqz\" (UniqueName: \"kubernetes.io/projected/6efb7037-6af6-4b85-b2fc-940a912cddf4-kube-api-access-x8fqz\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 
09:10:32.230154 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cc5d5d86-sr2r2"] Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.254468 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7cc5d5d86-sr2r2"] Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.328866 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fqz\" (UniqueName: \"kubernetes.io/projected/6efb7037-6af6-4b85-b2fc-940a912cddf4-kube-api-access-x8fqz\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.329222 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-scripts\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.329360 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.329622 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.329793 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6efb7037-6af6-4b85-b2fc-940a912cddf4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.329944 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-config-data\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.334413 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6efb7037-6af6-4b85-b2fc-940a912cddf4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.341365 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-scripts\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.346821 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.354565 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 
09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.355455 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efb7037-6af6-4b85-b2fc-940a912cddf4-config-data\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.361017 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fqz\" (UniqueName: \"kubernetes.io/projected/6efb7037-6af6-4b85-b2fc-940a912cddf4-kube-api-access-x8fqz\") pod \"manila-scheduler-0\" (UID: \"6efb7037-6af6-4b85-b2fc-940a912cddf4\") " pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.491289 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 23 09:10:32 crc kubenswrapper[4940]: I0223 09:10:32.578282 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6884678d78-ckt87" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 23 09:10:33 crc kubenswrapper[4940]: I0223 09:10:33.085023 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerStarted","Data":"0b1e0f3be5628bb714e510c5e1acf36dd29530ee01a51784b8fe95b32faeb9e7"} Feb 23 09:10:33 crc kubenswrapper[4940]: I0223 09:10:33.100197 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:33 crc kubenswrapper[4940]: I0223 09:10:33.163202 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 23 09:10:33 crc kubenswrapper[4940]: W0223 09:10:33.179936 4940 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6efb7037_6af6_4b85_b2fc_940a912cddf4.slice/crio-902e25230203d0b812a577accd5394e73370387300a7754c242ec30ce83ea45d WatchSource:0}: Error finding container 902e25230203d0b812a577accd5394e73370387300a7754c242ec30ce83ea45d: Status 404 returned error can't find the container with id 902e25230203d0b812a577accd5394e73370387300a7754c242ec30ce83ea45d Feb 23 09:10:33 crc kubenswrapper[4940]: I0223 09:10:33.361147 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a587b9bd-1362-449d-92a0-6b2f25b45735" path="/var/lib/kubelet/pods/a587b9bd-1362-449d-92a0-6b2f25b45735/volumes" Feb 23 09:10:33 crc kubenswrapper[4940]: I0223 09:10:33.361857 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eafa7497-50f0-456e-9764-826840da7372" path="/var/lib/kubelet/pods/eafa7497-50f0-456e-9764-826840da7372/volumes" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.111875 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6efb7037-6af6-4b85-b2fc-940a912cddf4","Type":"ContainerStarted","Data":"ac28eeff528df1dfa055815a5f30a534c421bf5a60f74d827416bbc8ed5f8cad"} Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.112393 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6efb7037-6af6-4b85-b2fc-940a912cddf4","Type":"ContainerStarted","Data":"902e25230203d0b812a577accd5394e73370387300a7754c242ec30ce83ea45d"} Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.686902 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r5b46"] Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.693932 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.697998 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.698168 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.698228 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v2qrp" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.706419 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl6q\" (UniqueName: \"kubernetes.io/projected/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-kube-api-access-dcl6q\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.706505 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.706589 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-config-data\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.706658 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-scripts\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.729396 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r5b46"] Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.808526 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-config-data\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.808587 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-scripts\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.808664 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcl6q\" (UniqueName: \"kubernetes.io/projected/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-kube-api-access-dcl6q\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.808726 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: 
\"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.817425 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.817419 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-scripts\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.817528 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-config-data\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:34 crc kubenswrapper[4940]: I0223 09:10:34.836112 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl6q\" (UniqueName: \"kubernetes.io/projected/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-kube-api-access-dcl6q\") pod \"nova-cell0-conductor-db-sync-r5b46\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:35 crc kubenswrapper[4940]: I0223 09:10:35.014130 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:35 crc kubenswrapper[4940]: I0223 09:10:35.130446 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerStarted","Data":"4b61438b2bc039f52d4d1fa890ebc693adbae001d8364d8bf45e1ace6a959678"} Feb 23 09:10:35 crc kubenswrapper[4940]: I0223 09:10:35.130493 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerStarted","Data":"d737e507350840eec5d8ee4d49e503df0c741f3bbbd038972afe8f0e9f371fd9"} Feb 23 09:10:35 crc kubenswrapper[4940]: I0223 09:10:35.134716 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"6efb7037-6af6-4b85-b2fc-940a912cddf4","Type":"ContainerStarted","Data":"0d9986917959a890d2db0644d9204bb785000429388508c167b8193c64cc0e72"} Feb 23 09:10:35 crc kubenswrapper[4940]: I0223 09:10:35.178741 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.178716266 podStartE2EDuration="3.178716266s" podCreationTimestamp="2026-02-23 09:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:35.166477983 +0000 UTC m=+1366.549684140" watchObservedRunningTime="2026-02-23 09:10:35.178716266 +0000 UTC m=+1366.561922423" Feb 23 09:10:35 crc kubenswrapper[4940]: I0223 09:10:35.672023 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r5b46"] Feb 23 09:10:36 crc kubenswrapper[4940]: I0223 09:10:36.145534 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r5b46" 
event={"ID":"8b3cfe7f-19c0-47e1-b535-0b4e98dba050","Type":"ContainerStarted","Data":"3490bc066ac50263862154c78e08da73e0b4b0ee9da70b6a1de0433c37534d8b"} Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.157705 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerStarted","Data":"d6872764f58919ebd4459152c4126b8bb59d194ed9954c57499da8633d899ec5"} Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.158115 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-central-agent" containerID="cri-o://0b1e0f3be5628bb714e510c5e1acf36dd29530ee01a51784b8fe95b32faeb9e7" gracePeriod=30 Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.158462 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.158861 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="proxy-httpd" containerID="cri-o://d6872764f58919ebd4459152c4126b8bb59d194ed9954c57499da8633d899ec5" gracePeriod=30 Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.158925 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="sg-core" containerID="cri-o://d737e507350840eec5d8ee4d49e503df0c741f3bbbd038972afe8f0e9f371fd9" gracePeriod=30 Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.158995 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-notification-agent" containerID="cri-o://4b61438b2bc039f52d4d1fa890ebc693adbae001d8364d8bf45e1ace6a959678" 
gracePeriod=30 Feb 23 09:10:37 crc kubenswrapper[4940]: I0223 09:10:37.191804 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=10.70095847 podStartE2EDuration="15.191781805s" podCreationTimestamp="2026-02-23 09:10:22 +0000 UTC" firstStartedPulling="2026-02-23 09:10:31.888963926 +0000 UTC m=+1363.272170093" lastFinishedPulling="2026-02-23 09:10:36.379787271 +0000 UTC m=+1367.762993428" observedRunningTime="2026-02-23 09:10:37.180928006 +0000 UTC m=+1368.564134193" watchObservedRunningTime="2026-02-23 09:10:37.191781805 +0000 UTC m=+1368.574987962" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.181926 4940 generic.go:334] "Generic (PLEG): container finished" podID="04e3d354-3545-45bc-be74-899fa8a434dd" containerID="d6872764f58919ebd4459152c4126b8bb59d194ed9954c57499da8633d899ec5" exitCode=0 Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.182237 4940 generic.go:334] "Generic (PLEG): container finished" podID="04e3d354-3545-45bc-be74-899fa8a434dd" containerID="d737e507350840eec5d8ee4d49e503df0c741f3bbbd038972afe8f0e9f371fd9" exitCode=2 Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.182248 4940 generic.go:334] "Generic (PLEG): container finished" podID="04e3d354-3545-45bc-be74-899fa8a434dd" containerID="4b61438b2bc039f52d4d1fa890ebc693adbae001d8364d8bf45e1ace6a959678" exitCode=0 Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.182256 4940 generic.go:334] "Generic (PLEG): container finished" podID="04e3d354-3545-45bc-be74-899fa8a434dd" containerID="0b1e0f3be5628bb714e510c5e1acf36dd29530ee01a51784b8fe95b32faeb9e7" exitCode=0 Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.181980 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerDied","Data":"d6872764f58919ebd4459152c4126b8bb59d194ed9954c57499da8633d899ec5"} Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 
09:10:38.182316 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerDied","Data":"d737e507350840eec5d8ee4d49e503df0c741f3bbbd038972afe8f0e9f371fd9"} Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.182347 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerDied","Data":"4b61438b2bc039f52d4d1fa890ebc693adbae001d8364d8bf45e1ace6a959678"} Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.182359 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerDied","Data":"0b1e0f3be5628bb714e510c5e1acf36dd29530ee01a51784b8fe95b32faeb9e7"} Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.319071 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486666 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-combined-ca-bundle\") pod \"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486700 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-sg-core-conf-yaml\") pod \"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486733 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-config-data\") pod 
\"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486773 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-log-httpd\") pod \"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486796 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-scripts\") pod \"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486823 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf9j7\" (UniqueName: \"kubernetes.io/projected/04e3d354-3545-45bc-be74-899fa8a434dd-kube-api-access-lf9j7\") pod \"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.486856 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-run-httpd\") pod \"04e3d354-3545-45bc-be74-899fa8a434dd\" (UID: \"04e3d354-3545-45bc-be74-899fa8a434dd\") " Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.487405 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.487575 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.489018 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.489043 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04e3d354-3545-45bc-be74-899fa8a434dd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.494989 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e3d354-3545-45bc-be74-899fa8a434dd-kube-api-access-lf9j7" (OuterVolumeSpecName: "kube-api-access-lf9j7") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "kube-api-access-lf9j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.495117 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-scripts" (OuterVolumeSpecName: "scripts") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.528657 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.591359 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.591634 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.591735 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf9j7\" (UniqueName: \"kubernetes.io/projected/04e3d354-3545-45bc-be74-899fa8a434dd-kube-api-access-lf9j7\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.610277 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.676107 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-config-data" (OuterVolumeSpecName: "config-data") pod "04e3d354-3545-45bc-be74-899fa8a434dd" (UID: "04e3d354-3545-45bc-be74-899fa8a434dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.696004 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.696052 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e3d354-3545-45bc-be74-899fa8a434dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:38 crc kubenswrapper[4940]: I0223 09:10:38.952823 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.109256 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-tls-certs\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.109484 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-secret-key\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.109812 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-scripts\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.110141 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbkvh\" (UniqueName: \"kubernetes.io/projected/e330abf6-9282-4221-b286-672ffc3985e7-kube-api-access-pbkvh\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.110207 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-combined-ca-bundle\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.110243 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e330abf6-9282-4221-b286-672ffc3985e7-logs\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.110319 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-config-data\") pod \"e330abf6-9282-4221-b286-672ffc3985e7\" (UID: \"e330abf6-9282-4221-b286-672ffc3985e7\") " Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.111098 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e330abf6-9282-4221-b286-672ffc3985e7-logs" (OuterVolumeSpecName: "logs") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.111413 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e330abf6-9282-4221-b286-672ffc3985e7-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.113990 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e330abf6-9282-4221-b286-672ffc3985e7-kube-api-access-pbkvh" (OuterVolumeSpecName: "kube-api-access-pbkvh") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "kube-api-access-pbkvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.114192 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.136742 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-scripts" (OuterVolumeSpecName: "scripts") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.142005 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-config-data" (OuterVolumeSpecName: "config-data") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.142243 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.173854 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e330abf6-9282-4221-b286-672ffc3985e7" (UID: "e330abf6-9282-4221-b286-672ffc3985e7"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.198119 4940 generic.go:334] "Generic (PLEG): container finished" podID="e330abf6-9282-4221-b286-672ffc3985e7" containerID="9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd" exitCode=137 Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.198226 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6884678d78-ckt87" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.198193 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6884678d78-ckt87" event={"ID":"e330abf6-9282-4221-b286-672ffc3985e7","Type":"ContainerDied","Data":"9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd"} Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.198393 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6884678d78-ckt87" event={"ID":"e330abf6-9282-4221-b286-672ffc3985e7","Type":"ContainerDied","Data":"b2f6b421fd3fcdbe1e9e51fe97930ec00f103bfbd3e3aa172f2f7caee49d75a0"} Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.198424 4940 scope.go:117] "RemoveContainer" containerID="7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.204223 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04e3d354-3545-45bc-be74-899fa8a434dd","Type":"ContainerDied","Data":"67c010c5bdd43256e9b3a0b106872969ca1d590ab066541de6d32ab7fd9dd06c"} Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.204304 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.215187 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.215226 4940 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.215239 4940 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.215251 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e330abf6-9282-4221-b286-672ffc3985e7-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.215264 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbkvh\" (UniqueName: \"kubernetes.io/projected/e330abf6-9282-4221-b286-672ffc3985e7-kube-api-access-pbkvh\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.215277 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e330abf6-9282-4221-b286-672ffc3985e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 
09:10:39.254901 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6884678d78-ckt87"] Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.266900 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6884678d78-ckt87"] Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.281926 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.295463 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.328497 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:39 crc kubenswrapper[4940]: E0223 09:10:39.329147 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="sg-core" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.329159 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="sg-core" Feb 23 09:10:39 crc kubenswrapper[4940]: E0223 09:10:39.329169 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.329175 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" Feb 23 09:10:39 crc kubenswrapper[4940]: E0223 09:10:39.329188 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-central-agent" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.329193 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-central-agent" Feb 23 09:10:39 crc kubenswrapper[4940]: E0223 09:10:39.329239 4940 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="proxy-httpd" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.329245 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="proxy-httpd" Feb 23 09:10:39 crc kubenswrapper[4940]: E0223 09:10:39.329266 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon-log" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.329272 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon-log" Feb 23 09:10:39 crc kubenswrapper[4940]: E0223 09:10:39.329297 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-notification-agent" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.329303 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-notification-agent" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.330007 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="proxy-httpd" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.330026 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-central-agent" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.330038 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.330053 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="sg-core" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.330063 4940 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" containerName="ceilometer-notification-agent" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.330079 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e330abf6-9282-4221-b286-672ffc3985e7" containerName="horizon-log" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.331901 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.334401 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.334577 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.365327 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e3d354-3545-45bc-be74-899fa8a434dd" path="/var/lib/kubelet/pods/04e3d354-3545-45bc-be74-899fa8a434dd/volumes" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.366186 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e330abf6-9282-4221-b286-672ffc3985e7" path="/var/lib/kubelet/pods/e330abf6-9282-4221-b286-672ffc3985e7/volumes" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.366745 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.406465 4940 scope.go:117] "RemoveContainer" containerID="9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521233 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-config-data\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " 
pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521312 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521401 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2zzg\" (UniqueName: \"kubernetes.io/projected/59266403-fc55-45bf-a711-ff1b0c13c329-kube-api-access-n2zzg\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521518 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-log-httpd\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521546 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-run-httpd\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521565 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-scripts\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.521598 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623630 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-scripts\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623693 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623757 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-config-data\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623798 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623872 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2zzg\" (UniqueName: \"kubernetes.io/projected/59266403-fc55-45bf-a711-ff1b0c13c329-kube-api-access-n2zzg\") pod \"ceilometer-0\" (UID: 
\"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623945 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-log-httpd\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.623977 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-run-httpd\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.624554 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-log-httpd\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.624580 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-run-httpd\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.628860 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.630118 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-scripts\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.630199 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.646895 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-config-data\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.650643 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2zzg\" (UniqueName: \"kubernetes.io/projected/59266403-fc55-45bf-a711-ff1b0c13c329-kube-api-access-n2zzg\") pod \"ceilometer-0\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " pod="openstack/ceilometer-0" Feb 23 09:10:39 crc kubenswrapper[4940]: I0223 09:10:39.657519 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:40 crc kubenswrapper[4940]: I0223 09:10:40.124854 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c475756fc-pxxbv" Feb 23 09:10:40 crc kubenswrapper[4940]: I0223 09:10:40.383683 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:42 crc kubenswrapper[4940]: I0223 09:10:42.491823 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 23 09:10:43 crc kubenswrapper[4940]: I0223 09:10:43.804969 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 23 09:10:43 crc kubenswrapper[4940]: I0223 09:10:43.893865 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:44 crc kubenswrapper[4940]: I0223 09:10:44.276381 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="manila-share" containerID="cri-o://d4e707da34679415a391142e287a18b6057007359a5330ef977a37142fb5e6fb" gracePeriod=30 Feb 23 09:10:44 crc kubenswrapper[4940]: I0223 09:10:44.277027 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="probe" containerID="cri-o://65138f81a7970bcde369c1e9f6ca1969251f7ad6a40f85aa2f816e5160390912" gracePeriod=30 Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.259106 4940 scope.go:117] "RemoveContainer" containerID="7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736" Feb 23 09:10:45 crc kubenswrapper[4940]: E0223 09:10:45.267322 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736\": container with ID starting with 7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736 not found: ID does not exist" containerID="7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.267368 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736"} err="failed to get container status \"7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736\": rpc error: code = NotFound desc = could not find container \"7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736\": container with ID starting with 7867726af076a08e4add2f52c56486390e581182e1eadda89120ffd72a88d736 not found: ID does not exist" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.267394 4940 scope.go:117] "RemoveContainer" containerID="9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd" Feb 23 09:10:45 crc kubenswrapper[4940]: E0223 09:10:45.267771 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd\": container with ID starting with 9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd not found: ID does not exist" containerID="9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.267817 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd"} err="failed to get container status \"9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd\": rpc error: code = NotFound desc = could not find container \"9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd\": container with ID 
starting with 9839860a41439a44231b70aa313797924a814f32443061171c7749de8e9583fd not found: ID does not exist" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.267832 4940 scope.go:117] "RemoveContainer" containerID="d6872764f58919ebd4459152c4126b8bb59d194ed9954c57499da8633d899ec5" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.290729 4940 generic.go:334] "Generic (PLEG): container finished" podID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerID="65138f81a7970bcde369c1e9f6ca1969251f7ad6a40f85aa2f816e5160390912" exitCode=0 Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.290765 4940 generic.go:334] "Generic (PLEG): container finished" podID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerID="d4e707da34679415a391142e287a18b6057007359a5330ef977a37142fb5e6fb" exitCode=1 Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.290788 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ec61a030-1b08-4c8f-8008-842f8c7decb0","Type":"ContainerDied","Data":"65138f81a7970bcde369c1e9f6ca1969251f7ad6a40f85aa2f816e5160390912"} Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.290814 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ec61a030-1b08-4c8f-8008-842f8c7decb0","Type":"ContainerDied","Data":"d4e707da34679415a391142e287a18b6057007359a5330ef977a37142fb5e6fb"} Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.383123 4940 scope.go:117] "RemoveContainer" containerID="d737e507350840eec5d8ee4d49e503df0c741f3bbbd038972afe8f0e9f371fd9" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.542834 4940 scope.go:117] "RemoveContainer" containerID="4b61438b2bc039f52d4d1fa890ebc693adbae001d8364d8bf45e1ace6a959678" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.576330 4940 scope.go:117] "RemoveContainer" containerID="0b1e0f3be5628bb714e510c5e1acf36dd29530ee01a51784b8fe95b32faeb9e7" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 
09:10:45.784891 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.921040 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.952363 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-etc-machine-id\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.952459 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.952843 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-var-lib-manila\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.952921 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.953052 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhzg2\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-kube-api-access-bhzg2\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.953262 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-scripts\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.953388 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-combined-ca-bundle\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.953520 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data-custom\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.953726 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-ceph\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.953907 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data\") pod \"ec61a030-1b08-4c8f-8008-842f8c7decb0\" (UID: \"ec61a030-1b08-4c8f-8008-842f8c7decb0\") " Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.954914 4940 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.955010 4940 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ec61a030-1b08-4c8f-8008-842f8c7decb0-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.960256 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-ceph" (OuterVolumeSpecName: "ceph") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.960408 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-kube-api-access-bhzg2" (OuterVolumeSpecName: "kube-api-access-bhzg2") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "kube-api-access-bhzg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.961798 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:45 crc kubenswrapper[4940]: I0223 09:10:45.964081 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-scripts" (OuterVolumeSpecName: "scripts") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.015348 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.056334 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.056363 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.056373 4940 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.056383 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-ceph\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:46 crc 
kubenswrapper[4940]: I0223 09:10:46.056391 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhzg2\" (UniqueName: \"kubernetes.io/projected/ec61a030-1b08-4c8f-8008-842f8c7decb0-kube-api-access-bhzg2\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.088438 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data" (OuterVolumeSpecName: "config-data") pod "ec61a030-1b08-4c8f-8008-842f8c7decb0" (UID: "ec61a030-1b08-4c8f-8008-842f8c7decb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.157704 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec61a030-1b08-4c8f-8008-842f8c7decb0-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.308162 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ec61a030-1b08-4c8f-8008-842f8c7decb0","Type":"ContainerDied","Data":"b37d5a91c4195d4558ffb2fab7bf666a9fbf5bdc822c5acd147ab5c8ea4927df"} Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.308220 4940 scope.go:117] "RemoveContainer" containerID="65138f81a7970bcde369c1e9f6ca1969251f7ad6a40f85aa2f816e5160390912" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.308365 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.318815 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerStarted","Data":"fcdfd7cbafa68bfaa420b253f8759ced83a077ab74af08f08867815aced81355"} Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.324477 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r5b46" event={"ID":"8b3cfe7f-19c0-47e1-b535-0b4e98dba050","Type":"ContainerStarted","Data":"94c228c755da60cf2c2d4aff4d92241a5462047210c1ba47c929e426e1101812"} Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.349041 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-r5b46" podStartSLOduration=2.677556474 podStartE2EDuration="12.349016772s" podCreationTimestamp="2026-02-23 09:10:34 +0000 UTC" firstStartedPulling="2026-02-23 09:10:35.678574419 +0000 UTC m=+1367.061780576" lastFinishedPulling="2026-02-23 09:10:45.350034717 +0000 UTC m=+1376.733240874" observedRunningTime="2026-02-23 09:10:46.340091933 +0000 UTC m=+1377.723298100" watchObservedRunningTime="2026-02-23 09:10:46.349016772 +0000 UTC m=+1377.732222919" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.485737 4940 scope.go:117] "RemoveContainer" containerID="d4e707da34679415a391142e287a18b6057007359a5330ef977a37142fb5e6fb" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.516412 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.537841 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.558217 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:46 crc 
kubenswrapper[4940]: E0223 09:10:46.558678 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="manila-share" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.558700 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="manila-share" Feb 23 09:10:46 crc kubenswrapper[4940]: E0223 09:10:46.558728 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="probe" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.558734 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="probe" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.558924 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="manila-share" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.558938 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" containerName="probe" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.559927 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.562058 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.581506 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.668884 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.668945 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.668981 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.669064 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2f2\" (UniqueName: \"kubernetes.io/projected/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-kube-api-access-9p2f2\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 
09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.669101 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.669123 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-scripts\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.669174 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-config-data\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.669218 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-ceph\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771316 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2f2\" (UniqueName: \"kubernetes.io/projected/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-kube-api-access-9p2f2\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771401 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771431 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-scripts\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771505 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-config-data\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771559 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-ceph\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771609 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771646 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771674 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.771836 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.772784 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.779339 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.780148 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-ceph\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " 
pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.783950 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-scripts\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.788525 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-config-data\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.804532 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.869506 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2f2\" (UniqueName: \"kubernetes.io/projected/1cfd9d39-e351-44f6-90b2-02c15fef4e9f-kube-api-access-9p2f2\") pod \"manila-share-share1-0\" (UID: \"1cfd9d39-e351-44f6-90b2-02c15fef4e9f\") " pod="openstack/manila-share-share1-0" Feb 23 09:10:46 crc kubenswrapper[4940]: I0223 09:10:46.944920 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 23 09:10:47 crc kubenswrapper[4940]: I0223 09:10:47.336495 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerStarted","Data":"a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed"} Feb 23 09:10:47 crc kubenswrapper[4940]: I0223 09:10:47.336988 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerStarted","Data":"c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe"} Feb 23 09:10:47 crc kubenswrapper[4940]: I0223 09:10:47.358414 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec61a030-1b08-4c8f-8008-842f8c7decb0" path="/var/lib/kubelet/pods/ec61a030-1b08-4c8f-8008-842f8c7decb0/volumes" Feb 23 09:10:47 crc kubenswrapper[4940]: I0223 09:10:47.595749 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 23 09:10:48 crc kubenswrapper[4940]: I0223 09:10:48.396278 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerStarted","Data":"4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f"} Feb 23 09:10:48 crc kubenswrapper[4940]: I0223 09:10:48.403935 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1cfd9d39-e351-44f6-90b2-02c15fef4e9f","Type":"ContainerStarted","Data":"1816ef314a5794c0e730750403e80da01aae0636e22d30b177247949d7829f1d"} Feb 23 09:10:48 crc kubenswrapper[4940]: I0223 09:10:48.403979 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1cfd9d39-e351-44f6-90b2-02c15fef4e9f","Type":"ContainerStarted","Data":"dda0812c5855ecc37a87e59aa5a3c5705a49bc408e388e09a33f53425a220b0d"} Feb 23 09:10:49 
crc kubenswrapper[4940]: I0223 09:10:49.413415 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"1cfd9d39-e351-44f6-90b2-02c15fef4e9f","Type":"ContainerStarted","Data":"0772042ba9d484aebe4c2f6b557d54e7f8b9c32cc6caf2de06cb1d4eb47e3197"} Feb 23 09:10:49 crc kubenswrapper[4940]: I0223 09:10:49.435919 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.435902624 podStartE2EDuration="3.435902624s" podCreationTimestamp="2026-02-23 09:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:10:49.434892072 +0000 UTC m=+1380.818098249" watchObservedRunningTime="2026-02-23 09:10:49.435902624 +0000 UTC m=+1380.819108781" Feb 23 09:10:50 crc kubenswrapper[4940]: I0223 09:10:50.425343 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerStarted","Data":"a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c"} Feb 23 09:10:50 crc kubenswrapper[4940]: I0223 09:10:50.425670 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-central-agent" containerID="cri-o://c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe" gracePeriod=30 Feb 23 09:10:50 crc kubenswrapper[4940]: I0223 09:10:50.425670 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="proxy-httpd" containerID="cri-o://a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c" gracePeriod=30 Feb 23 09:10:50 crc kubenswrapper[4940]: I0223 09:10:50.425723 4940 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-notification-agent" containerID="cri-o://a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed" gracePeriod=30 Feb 23 09:10:50 crc kubenswrapper[4940]: I0223 09:10:50.425689 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="sg-core" containerID="cri-o://4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f" gracePeriod=30 Feb 23 09:10:50 crc kubenswrapper[4940]: I0223 09:10:50.451930 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.60763376 podStartE2EDuration="11.45191032s" podCreationTimestamp="2026-02-23 09:10:39 +0000 UTC" firstStartedPulling="2026-02-23 09:10:45.926354859 +0000 UTC m=+1377.309561016" lastFinishedPulling="2026-02-23 09:10:49.770631419 +0000 UTC m=+1381.153837576" observedRunningTime="2026-02-23 09:10:50.448257585 +0000 UTC m=+1381.831463742" watchObservedRunningTime="2026-02-23 09:10:50.45191032 +0000 UTC m=+1381.835116477" Feb 23 09:10:51 crc kubenswrapper[4940]: I0223 09:10:51.437917 4940 generic.go:334] "Generic (PLEG): container finished" podID="59266403-fc55-45bf-a711-ff1b0c13c329" containerID="a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c" exitCode=0 Feb 23 09:10:51 crc kubenswrapper[4940]: I0223 09:10:51.438427 4940 generic.go:334] "Generic (PLEG): container finished" podID="59266403-fc55-45bf-a711-ff1b0c13c329" containerID="4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f" exitCode=2 Feb 23 09:10:51 crc kubenswrapper[4940]: I0223 09:10:51.438444 4940 generic.go:334] "Generic (PLEG): container finished" podID="59266403-fc55-45bf-a711-ff1b0c13c329" containerID="a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed" exitCode=0 Feb 23 09:10:51 crc kubenswrapper[4940]: I0223 
09:10:51.438001 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerDied","Data":"a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c"} Feb 23 09:10:51 crc kubenswrapper[4940]: I0223 09:10:51.438485 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerDied","Data":"4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f"} Feb 23 09:10:51 crc kubenswrapper[4940]: I0223 09:10:51.438501 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerDied","Data":"a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed"} Feb 23 09:10:54 crc kubenswrapper[4940]: I0223 09:10:54.190600 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 23 09:10:56 crc kubenswrapper[4940]: I0223 09:10:56.946177 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 23 09:10:58 crc kubenswrapper[4940]: I0223 09:10:58.501113 4940 generic.go:334] "Generic (PLEG): container finished" podID="8b3cfe7f-19c0-47e1-b535-0b4e98dba050" containerID="94c228c755da60cf2c2d4aff4d92241a5462047210c1ba47c929e426e1101812" exitCode=0 Feb 23 09:10:58 crc kubenswrapper[4940]: I0223 09:10:58.501208 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r5b46" event={"ID":"8b3cfe7f-19c0-47e1-b535-0b4e98dba050","Type":"ContainerDied","Data":"94c228c755da60cf2c2d4aff4d92241a5462047210c1ba47c929e426e1101812"} Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.207286 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338166 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-config-data\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338220 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2zzg\" (UniqueName: \"kubernetes.io/projected/59266403-fc55-45bf-a711-ff1b0c13c329-kube-api-access-n2zzg\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338280 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-run-httpd\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338365 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-sg-core-conf-yaml\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338411 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-scripts\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338430 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-combined-ca-bundle\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338506 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-log-httpd\") pod \"59266403-fc55-45bf-a711-ff1b0c13c329\" (UID: \"59266403-fc55-45bf-a711-ff1b0c13c329\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.338910 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.339108 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.339450 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.344337 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-scripts" (OuterVolumeSpecName: "scripts") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.345270 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59266403-fc55-45bf-a711-ff1b0c13c329-kube-api-access-n2zzg" (OuterVolumeSpecName: "kube-api-access-n2zzg") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "kube-api-access-n2zzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.376485 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.434243 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.441594 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.441656 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.441673 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.441683 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59266403-fc55-45bf-a711-ff1b0c13c329-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.441695 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2zzg\" (UniqueName: \"kubernetes.io/projected/59266403-fc55-45bf-a711-ff1b0c13c329-kube-api-access-n2zzg\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.454752 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-config-data" (OuterVolumeSpecName: "config-data") pod "59266403-fc55-45bf-a711-ff1b0c13c329" (UID: "59266403-fc55-45bf-a711-ff1b0c13c329"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.517479 4940 generic.go:334] "Generic (PLEG): container finished" podID="59266403-fc55-45bf-a711-ff1b0c13c329" containerID="c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe" exitCode=0 Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.517557 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.517588 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerDied","Data":"c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe"} Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.517629 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59266403-fc55-45bf-a711-ff1b0c13c329","Type":"ContainerDied","Data":"fcdfd7cbafa68bfaa420b253f8759ced83a077ab74af08f08867815aced81355"} Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.517645 4940 scope.go:117] "RemoveContainer" containerID="a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.543344 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59266403-fc55-45bf-a711-ff1b0c13c329-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.557586 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.586932 4940 scope.go:117] "RemoveContainer" containerID="4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.588161 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 
09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.605722 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.606322 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="proxy-httpd" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606350 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="proxy-httpd" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.606363 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-central-agent" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606371 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-central-agent" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.606381 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="sg-core" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606388 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="sg-core" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.606406 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-notification-agent" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606411 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-notification-agent" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606578 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-central-agent" Feb 23 09:10:59 crc 
kubenswrapper[4940]: I0223 09:10:59.606592 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="sg-core" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606604 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="ceilometer-notification-agent" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.606638 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" containerName="proxy-httpd" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.613457 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.615264 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.623449 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.623566 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.623925 4940 scope.go:117] "RemoveContainer" containerID="a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.654037 4940 scope.go:117] "RemoveContainer" containerID="c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.679310 4940 scope.go:117] "RemoveContainer" containerID="a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.679827 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c\": container with ID starting with a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c not found: ID does not exist" containerID="a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.679865 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c"} err="failed to get container status \"a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c\": rpc error: code = NotFound desc = could not find container \"a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c\": container with ID starting with a219aa6f9b672d8f98651b131b3a916fa6bb6dc9f934b1557a4f93124c2a7c3c not found: ID does not exist" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.679889 4940 scope.go:117] "RemoveContainer" containerID="4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.680203 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f\": container with ID starting with 4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f not found: ID does not exist" containerID="4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.680260 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f"} err="failed to get container status \"4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f\": rpc error: code = NotFound desc = could not find container \"4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f\": container with ID 
starting with 4889ba2f03ad79134ecd01bfd1772fa34505c16b36fc6c0bb5a24b496610438f not found: ID does not exist" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.680301 4940 scope.go:117] "RemoveContainer" containerID="a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.680624 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed\": container with ID starting with a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed not found: ID does not exist" containerID="a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.680656 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed"} err="failed to get container status \"a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed\": rpc error: code = NotFound desc = could not find container \"a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed\": container with ID starting with a6948c161ee795e2de6369b9172de5423247ddd8608121ee0d5a81edeff72fed not found: ID does not exist" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.680671 4940 scope.go:117] "RemoveContainer" containerID="c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe" Feb 23 09:10:59 crc kubenswrapper[4940]: E0223 09:10:59.680918 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe\": container with ID starting with c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe not found: ID does not exist" containerID="c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe" Feb 23 
09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.680952 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe"} err="failed to get container status \"c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe\": rpc error: code = NotFound desc = could not find container \"c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe\": container with ID starting with c0cc0776ad7617471a8763d7ca8fa9725b1c96c94cdb89d4aaac4d45ea83c9fe not found: ID does not exist" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.756931 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-scripts\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.756986 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-config-data\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.757305 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.757359 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-log-httpd\") pod \"ceilometer-0\" (UID: 
\"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.757452 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.757581 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8jr\" (UniqueName: \"kubernetes.io/projected/849f2281-2c86-44e5-88b0-5123f155ad0a-kube-api-access-vq8jr\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.757644 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-run-httpd\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.839728 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.859789 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-scripts\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.859843 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-config-data\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.860054 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.860107 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-log-httpd\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.860161 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.860210 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vq8jr\" (UniqueName: \"kubernetes.io/projected/849f2281-2c86-44e5-88b0-5123f155ad0a-kube-api-access-vq8jr\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.860245 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-run-httpd\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.866237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.866686 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-run-httpd\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.866730 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-log-httpd\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.866759 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-config-data\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 
09:10:59.876588 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-scripts\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.887372 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8jr\" (UniqueName: \"kubernetes.io/projected/849f2281-2c86-44e5-88b0-5123f155ad0a-kube-api-access-vq8jr\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.890453 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.950306 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.961645 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcl6q\" (UniqueName: \"kubernetes.io/projected/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-kube-api-access-dcl6q\") pod \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.961737 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-scripts\") pod \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.961873 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-config-data\") pod \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.961921 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-combined-ca-bundle\") pod \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\" (UID: \"8b3cfe7f-19c0-47e1-b535-0b4e98dba050\") " Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.972721 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-scripts" (OuterVolumeSpecName: "scripts") pod "8b3cfe7f-19c0-47e1-b535-0b4e98dba050" (UID: "8b3cfe7f-19c0-47e1-b535-0b4e98dba050"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.972825 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-kube-api-access-dcl6q" (OuterVolumeSpecName: "kube-api-access-dcl6q") pod "8b3cfe7f-19c0-47e1-b535-0b4e98dba050" (UID: "8b3cfe7f-19c0-47e1-b535-0b4e98dba050"). InnerVolumeSpecName "kube-api-access-dcl6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:10:59 crc kubenswrapper[4940]: I0223 09:10:59.996913 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b3cfe7f-19c0-47e1-b535-0b4e98dba050" (UID: "8b3cfe7f-19c0-47e1-b535-0b4e98dba050"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.004044 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-config-data" (OuterVolumeSpecName: "config-data") pod "8b3cfe7f-19c0-47e1-b535-0b4e98dba050" (UID: "8b3cfe7f-19c0-47e1-b535-0b4e98dba050"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.064016 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.064051 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.064067 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcl6q\" (UniqueName: \"kubernetes.io/projected/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-kube-api-access-dcl6q\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.064080 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3cfe7f-19c0-47e1-b535-0b4e98dba050-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:00 crc kubenswrapper[4940]: W0223 09:11:00.447638 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849f2281_2c86_44e5_88b0_5123f155ad0a.slice/crio-05b7068b067c73fda48438dab1a0027d8cb6b0c00e4fe68a8e23b3d2c9c09658 WatchSource:0}: Error finding container 05b7068b067c73fda48438dab1a0027d8cb6b0c00e4fe68a8e23b3d2c9c09658: Status 404 returned error can't find the container with id 05b7068b067c73fda48438dab1a0027d8cb6b0c00e4fe68a8e23b3d2c9c09658 Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.455964 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.529806 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerStarted","Data":"05b7068b067c73fda48438dab1a0027d8cb6b0c00e4fe68a8e23b3d2c9c09658"} Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.533910 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-r5b46" event={"ID":"8b3cfe7f-19c0-47e1-b535-0b4e98dba050","Type":"ContainerDied","Data":"3490bc066ac50263862154c78e08da73e0b4b0ee9da70b6a1de0433c37534d8b"} Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.534123 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3490bc066ac50263862154c78e08da73e0b4b0ee9da70b6a1de0433c37534d8b" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.534009 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-r5b46" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.622491 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:00 crc kubenswrapper[4940]: E0223 09:11:00.623066 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3cfe7f-19c0-47e1-b535-0b4e98dba050" containerName="nova-cell0-conductor-db-sync" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.623092 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3cfe7f-19c0-47e1-b535-0b4e98dba050" containerName="nova-cell0-conductor-db-sync" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.623317 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3cfe7f-19c0-47e1-b535-0b4e98dba050" containerName="nova-cell0-conductor-db-sync" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.624032 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.628340 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.628532 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v2qrp" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.630550 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.679176 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9l5\" (UniqueName: \"kubernetes.io/projected/f1a9723b-c83b-4940-b9bf-4daf50b078e5-kube-api-access-hl9l5\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.679233 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.679322 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.781525 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9l5\" (UniqueName: 
\"kubernetes.io/projected/f1a9723b-c83b-4940-b9bf-4daf50b078e5-kube-api-access-hl9l5\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.782187 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.783502 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.787805 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.787995 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.796732 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9l5\" (UniqueName: \"kubernetes.io/projected/f1a9723b-c83b-4940-b9bf-4daf50b078e5-kube-api-access-hl9l5\") pod \"nova-cell0-conductor-0\" (UID: 
\"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:00 crc kubenswrapper[4940]: I0223 09:11:00.941121 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.368424 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59266403-fc55-45bf-a711-ff1b0c13c329" path="/var/lib/kubelet/pods/59266403-fc55-45bf-a711-ff1b0c13c329/volumes" Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.413905 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.430376 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.430443 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.430494 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.431414 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fad30523a6437e41dff0a057c489e191d38b824c4f57fb0c206fe34b2b2c2ec"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.431486 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://4fad30523a6437e41dff0a057c489e191d38b824c4f57fb0c206fe34b2b2c2ec" gracePeriod=600 Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.557871 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f1a9723b-c83b-4940-b9bf-4daf50b078e5","Type":"ContainerStarted","Data":"142e7af60a954f4ba58a5c33b0922ac39fcfcbc1010c35c559d51fd599a91c5b"} Feb 23 09:11:01 crc kubenswrapper[4940]: I0223 09:11:01.570205 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerStarted","Data":"599733e1b233c09f1dc450cfa8bf4c007067abb75765fefe2b2f94cf02e45a54"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.169068 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.169588 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-log" containerID="cri-o://3b6b92fcd4fc029fa385dfd58b0573e51722a8378618e58b1258bf6bd5622be1" gracePeriod=30 Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.169691 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-httpd" containerID="cri-o://1aa64c34618472d0564c8fc7028b863b88d1acb04832bfcd0dff1c20073ebc14" gracePeriod=30 Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 
09:11:02.649983 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f1a9723b-c83b-4940-b9bf-4daf50b078e5","Type":"ContainerStarted","Data":"786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.650325 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.656035 4940 generic.go:334] "Generic (PLEG): container finished" podID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerID="3b6b92fcd4fc029fa385dfd58b0573e51722a8378618e58b1258bf6bd5622be1" exitCode=143 Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.656338 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"301b7712-08f5-47e5-b4bf-f1c3033eba8d","Type":"ContainerDied","Data":"3b6b92fcd4fc029fa385dfd58b0573e51722a8378618e58b1258bf6bd5622be1"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.658263 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerStarted","Data":"1f31f93ed3911443e9dc71e2cb5aaefcc7c0d0965fc832964eec7fe2571c8e2e"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.658288 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerStarted","Data":"81629838eee18d31cfe1723ffbac93927cb299e99c7b976f007b360f4c36da94"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.681036 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="4fad30523a6437e41dff0a057c489e191d38b824c4f57fb0c206fe34b2b2c2ec" exitCode=0 Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.681282 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"4fad30523a6437e41dff0a057c489e191d38b824c4f57fb0c206fe34b2b2c2ec"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.681310 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"} Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.681327 4940 scope.go:117] "RemoveContainer" containerID="c29b2fbbefbec40ee98e54f4b935b91d77c404cc4f7cfd7802c91d0a773948d7" Feb 23 09:11:02 crc kubenswrapper[4940]: I0223 09:11:02.697145 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.697125702 podStartE2EDuration="2.697125702s" podCreationTimestamp="2026-02-23 09:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:02.685445637 +0000 UTC m=+1394.068651794" watchObservedRunningTime="2026-02-23 09:11:02.697125702 +0000 UTC m=+1394.080331859" Feb 23 09:11:03 crc kubenswrapper[4940]: I0223 09:11:03.526532 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:03 crc kubenswrapper[4940]: I0223 09:11:03.574152 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:11:03 crc kubenswrapper[4940]: I0223 09:11:03.574439 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-log" containerID="cri-o://225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154" gracePeriod=30 Feb 23 09:11:03 crc 
kubenswrapper[4940]: I0223 09:11:03.574527 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-httpd" containerID="cri-o://0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a" gracePeriod=30 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.203811 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.740940 4940 generic.go:334] "Generic (PLEG): container finished" podID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerID="225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154" exitCode=143 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.741028 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3fdde6c-2b86-4f4c-b431-292279528e91","Type":"ContainerDied","Data":"225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154"} Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.744394 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="f1a9723b-c83b-4940-b9bf-4daf50b078e5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42" gracePeriod=30 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.744831 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-central-agent" containerID="cri-o://599733e1b233c09f1dc450cfa8bf4c007067abb75765fefe2b2f94cf02e45a54" gracePeriod=30 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.744888 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerStarted","Data":"3cb83cbbfc7fbfc38f6b0fdd9deadb062415f843a07df7ad0f815a75d6c3a2c1"} Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.745233 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.745277 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="proxy-httpd" containerID="cri-o://3cb83cbbfc7fbfc38f6b0fdd9deadb062415f843a07df7ad0f815a75d6c3a2c1" gracePeriod=30 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.745326 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="sg-core" containerID="cri-o://1f31f93ed3911443e9dc71e2cb5aaefcc7c0d0965fc832964eec7fe2571c8e2e" gracePeriod=30 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.745366 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-notification-agent" containerID="cri-o://81629838eee18d31cfe1723ffbac93927cb299e99c7b976f007b360f4c36da94" gracePeriod=30 Feb 23 09:11:04 crc kubenswrapper[4940]: I0223 09:11:04.778978 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.281116781 podStartE2EDuration="5.778899238s" podCreationTimestamp="2026-02-23 09:10:59 +0000 UTC" firstStartedPulling="2026-02-23 09:11:00.450480076 +0000 UTC m=+1391.833686243" lastFinishedPulling="2026-02-23 09:11:03.948262543 +0000 UTC m=+1395.331468700" observedRunningTime="2026-02-23 09:11:04.775492873 +0000 UTC m=+1396.158699030" watchObservedRunningTime="2026-02-23 09:11:04.778899238 +0000 UTC m=+1396.162105395" Feb 23 09:11:05 crc 
kubenswrapper[4940]: I0223 09:11:05.530547 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:45114->10.217.0.153:9292: read: connection reset by peer" Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.530717 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:45128->10.217.0.153:9292: read: connection reset by peer" Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.757143 4940 generic.go:334] "Generic (PLEG): container finished" podID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerID="3cb83cbbfc7fbfc38f6b0fdd9deadb062415f843a07df7ad0f815a75d6c3a2c1" exitCode=0 Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.757181 4940 generic.go:334] "Generic (PLEG): container finished" podID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerID="1f31f93ed3911443e9dc71e2cb5aaefcc7c0d0965fc832964eec7fe2571c8e2e" exitCode=2 Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.757192 4940 generic.go:334] "Generic (PLEG): container finished" podID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerID="81629838eee18d31cfe1723ffbac93927cb299e99c7b976f007b360f4c36da94" exitCode=0 Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.757234 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerDied","Data":"3cb83cbbfc7fbfc38f6b0fdd9deadb062415f843a07df7ad0f815a75d6c3a2c1"} Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.757266 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerDied","Data":"1f31f93ed3911443e9dc71e2cb5aaefcc7c0d0965fc832964eec7fe2571c8e2e"} Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.757280 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerDied","Data":"81629838eee18d31cfe1723ffbac93927cb299e99c7b976f007b360f4c36da94"} Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.764256 4940 generic.go:334] "Generic (PLEG): container finished" podID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerID="1aa64c34618472d0564c8fc7028b863b88d1acb04832bfcd0dff1c20073ebc14" exitCode=0 Feb 23 09:11:05 crc kubenswrapper[4940]: I0223 09:11:05.764671 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"301b7712-08f5-47e5-b4bf-f1c3033eba8d","Type":"ContainerDied","Data":"1aa64c34618472d0564c8fc7028b863b88d1acb04832bfcd0dff1c20073ebc14"} Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.340129 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.441634 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-config-data\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442012 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-httpd-run\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442348 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-scripts\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442382 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-logs\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442417 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-public-tls-certs\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442522 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-ceph\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442553 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.442672 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.443066 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-logs" (OuterVolumeSpecName: "logs") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.443731 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.443837 4940 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/301b7712-08f5-47e5-b4bf-f1c3033eba8d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.457024 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.457539 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-ceph" (OuterVolumeSpecName: "ceph") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.462821 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-scripts" (OuterVolumeSpecName: "scripts") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.495922 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.513113 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-config-data" (OuterVolumeSpecName: "config-data") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.544415 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.545752 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-combined-ca-bundle\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.545814 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbgw\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-kube-api-access-9bbgw\") pod \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\" (UID: \"301b7712-08f5-47e5-b4bf-f1c3033eba8d\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.546465 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.546488 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.546499 4940 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.546514 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-ceph\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.546537 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.556143 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-kube-api-access-9bbgw" (OuterVolumeSpecName: "kube-api-access-9bbgw") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "kube-api-access-9bbgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.571200 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.585459 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "301b7712-08f5-47e5-b4bf-f1c3033eba8d" (UID: "301b7712-08f5-47e5-b4bf-f1c3033eba8d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.646996 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-config-data\") pod \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.647050 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl9l5\" (UniqueName: \"kubernetes.io/projected/f1a9723b-c83b-4940-b9bf-4daf50b078e5-kube-api-access-hl9l5\") pod \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.647152 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-combined-ca-bundle\") pod \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\" (UID: \"f1a9723b-c83b-4940-b9bf-4daf50b078e5\") " Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.650348 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301b7712-08f5-47e5-b4bf-f1c3033eba8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.650509 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbgw\" (UniqueName: \"kubernetes.io/projected/301b7712-08f5-47e5-b4bf-f1c3033eba8d-kube-api-access-9bbgw\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.650602 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.650927 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a9723b-c83b-4940-b9bf-4daf50b078e5-kube-api-access-hl9l5" (OuterVolumeSpecName: "kube-api-access-hl9l5") pod "f1a9723b-c83b-4940-b9bf-4daf50b078e5" (UID: "f1a9723b-c83b-4940-b9bf-4daf50b078e5"). InnerVolumeSpecName "kube-api-access-hl9l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.675080 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-config-data" (OuterVolumeSpecName: "config-data") pod "f1a9723b-c83b-4940-b9bf-4daf50b078e5" (UID: "f1a9723b-c83b-4940-b9bf-4daf50b078e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.680687 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1a9723b-c83b-4940-b9bf-4daf50b078e5" (UID: "f1a9723b-c83b-4940-b9bf-4daf50b078e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.751958 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.752237 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a9723b-c83b-4940-b9bf-4daf50b078e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.752312 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl9l5\" (UniqueName: \"kubernetes.io/projected/f1a9723b-c83b-4940-b9bf-4daf50b078e5-kube-api-access-hl9l5\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.775678 4940 generic.go:334] "Generic (PLEG): container finished" podID="f1a9723b-c83b-4940-b9bf-4daf50b078e5" containerID="786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42" exitCode=0 Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.775810 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f1a9723b-c83b-4940-b9bf-4daf50b078e5","Type":"ContainerDied","Data":"786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42"} Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.775870 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.776103 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f1a9723b-c83b-4940-b9bf-4daf50b078e5","Type":"ContainerDied","Data":"142e7af60a954f4ba58a5c33b0922ac39fcfcbc1010c35c559d51fd599a91c5b"} Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.776206 4940 scope.go:117] "RemoveContainer" containerID="786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.778595 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"301b7712-08f5-47e5-b4bf-f1c3033eba8d","Type":"ContainerDied","Data":"f21125ec4223acc0f911d393b368b7f327de745cfbdcf21d8d02fbe2f5fba52b"} Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.778893 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.800686 4940 scope.go:117] "RemoveContainer" containerID="786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42" Feb 23 09:11:06 crc kubenswrapper[4940]: E0223 09:11:06.801504 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42\": container with ID starting with 786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42 not found: ID does not exist" containerID="786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.801529 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42"} err="failed to get container status 
\"786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42\": rpc error: code = NotFound desc = could not find container \"786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42\": container with ID starting with 786ed23b24823fa4c18f7986888cefc91aef81e905a663e08a91c8811816ad42 not found: ID does not exist" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.801551 4940 scope.go:117] "RemoveContainer" containerID="1aa64c34618472d0564c8fc7028b863b88d1acb04832bfcd0dff1c20073ebc14" Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.820096 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:06 crc kubenswrapper[4940]: I0223 09:11:06.840357 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.097687 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.102134 4940 scope.go:117] "RemoveContainer" containerID="3b6b92fcd4fc029fa385dfd58b0573e51722a8378618e58b1258bf6bd5622be1" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.137524 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.152789 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: E0223 09:11:07.153598 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-httpd" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.153721 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-httpd" Feb 23 09:11:07 crc kubenswrapper[4940]: E0223 09:11:07.153791 4940 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f1a9723b-c83b-4940-b9bf-4daf50b078e5" containerName="nova-cell0-conductor-conductor" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.153843 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a9723b-c83b-4940-b9bf-4daf50b078e5" containerName="nova-cell0-conductor-conductor" Feb 23 09:11:07 crc kubenswrapper[4940]: E0223 09:11:07.153925 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-log" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.153977 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-log" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.154214 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-log" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.154288 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a9723b-c83b-4940-b9bf-4daf50b078e5" containerName="nova-cell0-conductor-conductor" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.154352 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" containerName="glance-httpd" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.154999 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.162254 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.162496 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v2qrp" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.187368 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.206548 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.220557 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.234720 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.245413 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.245899 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.267910 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwtt\" (UniqueName: \"kubernetes.io/projected/09737df3-14f0-4f68-a683-5402bfcb0aab-kube-api-access-hbwtt\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.269030 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09737df3-14f0-4f68-a683-5402bfcb0aab-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.269104 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09737df3-14f0-4f68-a683-5402bfcb0aab-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.361349 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301b7712-08f5-47e5-b4bf-f1c3033eba8d" path="/var/lib/kubelet/pods/301b7712-08f5-47e5-b4bf-f1c3033eba8d/volumes" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.362231 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a9723b-c83b-4940-b9bf-4daf50b078e5" path="/var/lib/kubelet/pods/f1a9723b-c83b-4940-b9bf-4daf50b078e5/volumes" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371286 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-ceph\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371421 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371478 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09737df3-14f0-4f68-a683-5402bfcb0aab-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371527 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371594 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09737df3-14f0-4f68-a683-5402bfcb0aab-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371638 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371674 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371711 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-logs\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371800 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxccq\" (UniqueName: \"kubernetes.io/projected/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-kube-api-access-jxccq\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.371828 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.372003 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbwtt\" (UniqueName: \"kubernetes.io/projected/09737df3-14f0-4f68-a683-5402bfcb0aab-kube-api-access-hbwtt\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.372052 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.389626 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09737df3-14f0-4f68-a683-5402bfcb0aab-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.393870 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbwtt\" (UniqueName: \"kubernetes.io/projected/09737df3-14f0-4f68-a683-5402bfcb0aab-kube-api-access-hbwtt\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.400660 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09737df3-14f0-4f68-a683-5402bfcb0aab-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"09737df3-14f0-4f68-a683-5402bfcb0aab\") " pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.474737 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475087 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-ceph\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475188 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475252 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475306 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475338 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475374 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-logs\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475502 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxccq\" (UniqueName: \"kubernetes.io/projected/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-kube-api-access-jxccq\") pod 
\"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475532 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.475857 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.476671 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.477158 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-logs\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.481589 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-scripts\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " 
pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.482342 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-ceph\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.482458 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.482758 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.492917 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-config-data\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.497394 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxccq\" (UniqueName: \"kubernetes.io/projected/4c37345c-c81e-4d3f-8b55-8eec1705a5a1-kube-api-access-jxccq\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 
09:11:07.511525 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.552161 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"4c37345c-c81e-4d3f-8b55-8eec1705a5a1\") " pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.572288 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.753341 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.813506 4940 generic.go:334] "Generic (PLEG): container finished" podID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerID="0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a" exitCode=0 Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.813539 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3fdde6c-2b86-4f4c-b431-292279528e91","Type":"ContainerDied","Data":"0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a"} Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.813563 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3fdde6c-2b86-4f4c-b431-292279528e91","Type":"ContainerDied","Data":"79287ac34ed1c2aa4a9aa7fb372145156f95f46c3bfc296d298bfa4e70ebd52d"} Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.813580 4940 scope.go:117] "RemoveContainer" containerID="0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.813600 4940 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.850032 4940 scope.go:117] "RemoveContainer" containerID="225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.870912 4940 scope.go:117] "RemoveContainer" containerID="0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a" Feb 23 09:11:07 crc kubenswrapper[4940]: E0223 09:11:07.873928 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a\": container with ID starting with 0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a not found: ID does not exist" containerID="0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.873981 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a"} err="failed to get container status \"0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a\": rpc error: code = NotFound desc = could not find container \"0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a\": container with ID starting with 0358e3f2841db2404e85206a092ff663454bce4fea1a3a237583a4abe8f0178a not found: ID does not exist" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.874010 4940 scope.go:117] "RemoveContainer" containerID="225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154" Feb 23 09:11:07 crc kubenswrapper[4940]: E0223 09:11:07.874451 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154\": container with ID 
starting with 225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154 not found: ID does not exist" containerID="225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.874488 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154"} err="failed to get container status \"225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154\": rpc error: code = NotFound desc = could not find container \"225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154\": container with ID starting with 225b8f1b78a337f94f840edf63aede90339c5be288f9db5611f4a308ed21f154 not found: ID does not exist" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.906960 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-httpd-run\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907010 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-logs\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907037 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-scripts\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907057 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-ceph\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907100 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdqnc\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-kube-api-access-sdqnc\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907158 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-combined-ca-bundle\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907183 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907230 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-config-data\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.907244 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-internal-tls-certs\") pod \"c3fdde6c-2b86-4f4c-b431-292279528e91\" (UID: \"c3fdde6c-2b86-4f4c-b431-292279528e91\") " Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.911581 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-logs" (OuterVolumeSpecName: "logs") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.911897 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.913326 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-ceph" (OuterVolumeSpecName: "ceph") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.917014 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-kube-api-access-sdqnc" (OuterVolumeSpecName: "kube-api-access-sdqnc") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "kube-api-access-sdqnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.917068 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-scripts" (OuterVolumeSpecName: "scripts") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.919147 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.951535 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.979159 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:07 crc kubenswrapper[4940]: I0223 09:11:07.985469 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-config-data" (OuterVolumeSpecName: "config-data") pod "c3fdde6c-2b86-4f4c-b431-292279528e91" (UID: "c3fdde6c-2b86-4f4c-b431-292279528e91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009116 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009174 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009185 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009194 4940 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009205 4940 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009213 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3fdde6c-2b86-4f4c-b431-292279528e91-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009221 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3fdde6c-2b86-4f4c-b431-292279528e91-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009229 4940 reconciler_common.go:293] "Volume detached for volume \"ceph\" 
(UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-ceph\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.009237 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdqnc\" (UniqueName: \"kubernetes.io/projected/c3fdde6c-2b86-4f4c-b431-292279528e91-kube-api-access-sdqnc\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.353338 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.420274 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.426208 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.432719 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.456229 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.481301 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:11:08 crc kubenswrapper[4940]: E0223 09:11:08.481742 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-httpd" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.481763 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-httpd" Feb 23 09:11:08 crc kubenswrapper[4940]: E0223 09:11:08.481797 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-log" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.481803 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-log" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.481996 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-httpd" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.482016 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" containerName="glance-log" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.483025 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.488320 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.488621 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.515458 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529157 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529232 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6886923-a3fa-46f7-97f5-7864c61a5137-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529269 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/a6886923-a3fa-46f7-97f5-7864c61a5137-kube-api-access-f9hd8\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529297 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529322 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6886923-a3fa-46f7-97f5-7864c61a5137-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529385 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529433 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529505 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.529534 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6886923-a3fa-46f7-97f5-7864c61a5137-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.552363 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.631414 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.631499 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 
09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.631994 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632270 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632333 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6886923-a3fa-46f7-97f5-7864c61a5137-ceph\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632439 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632512 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6886923-a3fa-46f7-97f5-7864c61a5137-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632553 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/a6886923-a3fa-46f7-97f5-7864c61a5137-kube-api-access-f9hd8\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632582 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.632634 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6886923-a3fa-46f7-97f5-7864c61a5137-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.633371 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6886923-a3fa-46f7-97f5-7864c61a5137-logs\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.633465 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6886923-a3fa-46f7-97f5-7864c61a5137-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.638238 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.639168 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.641176 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.646114 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6886923-a3fa-46f7-97f5-7864c61a5137-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.664530 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hd8\" (UniqueName: \"kubernetes.io/projected/a6886923-a3fa-46f7-97f5-7864c61a5137-kube-api-access-f9hd8\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.669586 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6886923-a3fa-46f7-97f5-7864c61a5137-ceph\") pod 
\"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.694321 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"a6886923-a3fa-46f7-97f5-7864c61a5137\") " pod="openstack/glance-default-internal-api-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.891051 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"09737df3-14f0-4f68-a683-5402bfcb0aab","Type":"ContainerStarted","Data":"9a4af06df4f508b457d05384da4b40532ccfb38b8e077d8e3f4f8bb92eeb5f98"} Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.891357 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"09737df3-14f0-4f68-a683-5402bfcb0aab","Type":"ContainerStarted","Data":"db4d94f01d227498d3799550c93ae21f0cf084a155da928b73bad63c56d3c522"} Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.892687 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.902864 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c37345c-c81e-4d3f-8b55-8eec1705a5a1","Type":"ContainerStarted","Data":"119ecf7578fee594657ea9076a1383960df6111d148188361efbafe983da0eaa"} Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.929822 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.929799117 podStartE2EDuration="2.929799117s" podCreationTimestamp="2026-02-23 09:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 09:11:08.924996456 +0000 UTC m=+1400.308202613" watchObservedRunningTime="2026-02-23 09:11:08.929799117 +0000 UTC m=+1400.313005274" Feb 23 09:11:08 crc kubenswrapper[4940]: I0223 09:11:08.931139 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:09 crc kubenswrapper[4940]: I0223 09:11:09.354937 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3fdde6c-2b86-4f4c-b431-292279528e91" path="/var/lib/kubelet/pods/c3fdde6c-2b86-4f4c-b431-292279528e91/volumes" Feb 23 09:11:09 crc kubenswrapper[4940]: I0223 09:11:09.796585 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 09:11:09 crc kubenswrapper[4940]: W0223 09:11:09.797823 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6886923_a3fa_46f7_97f5_7864c61a5137.slice/crio-3c41e27ae7e1da040367da9c22cfb532077aec1adaed9a432da0f5ae342c7a0f WatchSource:0}: Error finding container 3c41e27ae7e1da040367da9c22cfb532077aec1adaed9a432da0f5ae342c7a0f: Status 404 returned error can't find the container with id 3c41e27ae7e1da040367da9c22cfb532077aec1adaed9a432da0f5ae342c7a0f Feb 23 09:11:09 crc kubenswrapper[4940]: I0223 09:11:09.919628 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c37345c-c81e-4d3f-8b55-8eec1705a5a1","Type":"ContainerStarted","Data":"35cce151bf1c22c1fc63e092ff15e756198645173c5a0ae2673c442452b0f012"} Feb 23 09:11:09 crc kubenswrapper[4940]: I0223 09:11:09.921522 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6886923-a3fa-46f7-97f5-7864c61a5137","Type":"ContainerStarted","Data":"3c41e27ae7e1da040367da9c22cfb532077aec1adaed9a432da0f5ae342c7a0f"} Feb 23 09:11:10 crc kubenswrapper[4940]: I0223 
09:11:10.085447 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 23 09:11:10 crc kubenswrapper[4940]: I0223 09:11:10.939711 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4c37345c-c81e-4d3f-8b55-8eec1705a5a1","Type":"ContainerStarted","Data":"884317a8a147154ace73d8ee5bc94956dae3ba139d9ce04a80d76a7454d76b3b"} Feb 23 09:11:10 crc kubenswrapper[4940]: I0223 09:11:10.943341 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6886923-a3fa-46f7-97f5-7864c61a5137","Type":"ContainerStarted","Data":"1601a7493d4d48ba03e537e93a2aa1c17476dc1f3498715840c3501db50179e5"} Feb 23 09:11:10 crc kubenswrapper[4940]: I0223 09:11:10.964531 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.964515514 podStartE2EDuration="3.964515514s" podCreationTimestamp="2026-02-23 09:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:10.957209546 +0000 UTC m=+1402.340415703" watchObservedRunningTime="2026-02-23 09:11:10.964515514 +0000 UTC m=+1402.347721671" Feb 23 09:11:11 crc kubenswrapper[4940]: I0223 09:11:11.957305 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a6886923-a3fa-46f7-97f5-7864c61a5137","Type":"ContainerStarted","Data":"c169855ebbe11f72b111f491b5349988c41f200b5d9f6f2128c10df667add793"} Feb 23 09:11:11 crc kubenswrapper[4940]: I0223 09:11:11.996792 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.996770527 podStartE2EDuration="3.996770527s" podCreationTimestamp="2026-02-23 09:11:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:11.980599352 +0000 UTC m=+1403.363805509" watchObservedRunningTime="2026-02-23 09:11:11.996770527 +0000 UTC m=+1403.379976684" Feb 23 09:11:13 crc kubenswrapper[4940]: E0223 09:11:13.859639 4940 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod849f2281_2c86_44e5_88b0_5123f155ad0a.slice/crio-599733e1b233c09f1dc450cfa8bf4c007067abb75765fefe2b2f94cf02e45a54.scope\": RecentStats: unable to find data in memory cache]" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.009138 4940 generic.go:334] "Generic (PLEG): container finished" podID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerID="599733e1b233c09f1dc450cfa8bf4c007067abb75765fefe2b2f94cf02e45a54" exitCode=0 Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.009201 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerDied","Data":"599733e1b233c09f1dc450cfa8bf4c007067abb75765fefe2b2f94cf02e45a54"} Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.146011 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.254484 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-run-httpd\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.254678 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-sg-core-conf-yaml\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.254767 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq8jr\" (UniqueName: \"kubernetes.io/projected/849f2281-2c86-44e5-88b0-5123f155ad0a-kube-api-access-vq8jr\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.254953 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-combined-ca-bundle\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.255109 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-scripts\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.255185 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-log-httpd\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.255249 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-config-data\") pod \"849f2281-2c86-44e5-88b0-5123f155ad0a\" (UID: \"849f2281-2c86-44e5-88b0-5123f155ad0a\") " Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.255259 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.255922 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.256122 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.256141 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/849f2281-2c86-44e5-88b0-5123f155ad0a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.261232 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849f2281-2c86-44e5-88b0-5123f155ad0a-kube-api-access-vq8jr" (OuterVolumeSpecName: "kube-api-access-vq8jr") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "kube-api-access-vq8jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.274413 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-scripts" (OuterVolumeSpecName: "scripts") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.309074 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.345939 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.359475 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.359513 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.359525 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.359536 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq8jr\" (UniqueName: \"kubernetes.io/projected/849f2281-2c86-44e5-88b0-5123f155ad0a-kube-api-access-vq8jr\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.398695 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-config-data" (OuterVolumeSpecName: "config-data") pod "849f2281-2c86-44e5-88b0-5123f155ad0a" (UID: "849f2281-2c86-44e5-88b0-5123f155ad0a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:14 crc kubenswrapper[4940]: I0223 09:11:14.461348 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/849f2281-2c86-44e5-88b0-5123f155ad0a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.023995 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"849f2281-2c86-44e5-88b0-5123f155ad0a","Type":"ContainerDied","Data":"05b7068b067c73fda48438dab1a0027d8cb6b0c00e4fe68a8e23b3d2c9c09658"} Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.024299 4940 scope.go:117] "RemoveContainer" containerID="3cb83cbbfc7fbfc38f6b0fdd9deadb062415f843a07df7ad0f815a75d6c3a2c1" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.024050 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.058863 4940 scope.go:117] "RemoveContainer" containerID="1f31f93ed3911443e9dc71e2cb5aaefcc7c0d0965fc832964eec7fe2571c8e2e" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.061144 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.071386 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.087023 4940 scope.go:117] "RemoveContainer" containerID="81629838eee18d31cfe1723ffbac93927cb299e99c7b976f007b360f4c36da94" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.090895 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:15 crc kubenswrapper[4940]: E0223 09:11:15.091304 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="sg-core" Feb 23 09:11:15 crc kubenswrapper[4940]: 
I0223 09:11:15.091326 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="sg-core" Feb 23 09:11:15 crc kubenswrapper[4940]: E0223 09:11:15.091341 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-central-agent" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091350 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-central-agent" Feb 23 09:11:15 crc kubenswrapper[4940]: E0223 09:11:15.091363 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-notification-agent" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091370 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-notification-agent" Feb 23 09:11:15 crc kubenswrapper[4940]: E0223 09:11:15.091378 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="proxy-httpd" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091385 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="proxy-httpd" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091606 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="sg-core" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091643 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-central-agent" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091655 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="ceilometer-notification-agent" Feb 
23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.091662 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" containerName="proxy-httpd" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.093244 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.095803 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.096880 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.115559 4940 scope.go:117] "RemoveContainer" containerID="599733e1b233c09f1dc450cfa8bf4c007067abb75765fefe2b2f94cf02e45a54" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.117574 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.282751 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-run-httpd\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.282926 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-config-data\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.282977 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.283024 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.283072 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-log-httpd\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.283095 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hqs\" (UniqueName: \"kubernetes.io/projected/2d2d73f9-5768-49cf-b547-7668dfe210fa-kube-api-access-85hqs\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.283148 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-scripts\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.361239 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="849f2281-2c86-44e5-88b0-5123f155ad0a" path="/var/lib/kubelet/pods/849f2281-2c86-44e5-88b0-5123f155ad0a/volumes" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385520 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-scripts\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385599 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-run-httpd\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385721 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-config-data\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385761 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385803 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385845 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.385866 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hqs\" (UniqueName: \"kubernetes.io/projected/2d2d73f9-5768-49cf-b547-7668dfe210fa-kube-api-access-85hqs\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.386519 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-run-httpd\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.386791 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-log-httpd\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.390175 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-config-data\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.401934 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.401946 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-scripts\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.402048 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.405203 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hqs\" (UniqueName: \"kubernetes.io/projected/2d2d73f9-5768-49cf-b547-7668dfe210fa-kube-api-access-85hqs\") pod \"ceilometer-0\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " pod="openstack/ceilometer-0" Feb 23 09:11:15 crc kubenswrapper[4940]: I0223 09:11:15.416339 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:16 crc kubenswrapper[4940]: I0223 09:11:16.018793 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:16 crc kubenswrapper[4940]: I0223 09:11:16.034780 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerStarted","Data":"c13052bdfef55e477feca3b1f0e340338d40223f2157fc82f35f2a0f79189786"} Feb 23 09:11:17 crc kubenswrapper[4940]: I0223 09:11:17.047363 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerStarted","Data":"b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539"} Feb 23 09:11:17 crc kubenswrapper[4940]: I0223 09:11:17.538900 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 23 09:11:17 crc kubenswrapper[4940]: I0223 09:11:17.573949 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 09:11:17 crc kubenswrapper[4940]: I0223 09:11:17.574324 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 09:11:17 crc kubenswrapper[4940]: I0223 09:11:17.616150 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 09:11:17 crc kubenswrapper[4940]: I0223 09:11:17.654901 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.146781 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerStarted","Data":"6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944"} Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.146890 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.147148 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.795648 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-vvclk"] Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.806936 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vvclk" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.811772 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.813277 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vvclk"] Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.821073 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.838320 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67sf\" (UniqueName: \"kubernetes.io/projected/199cf4dd-ab6f-4d59-9a82-86c613352012-kube-api-access-l67sf\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.838383 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.838659 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-scripts\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.838774 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-config-data\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.933296 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.933334 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.941775 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-scripts\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk" Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.941853 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-config-data\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.941888 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67sf\" (UniqueName: \"kubernetes.io/projected/199cf4dd-ab6f-4d59-9a82-86c613352012-kube-api-access-l67sf\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.941926 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.942400 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.947950 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.952822 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-scripts\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.954341 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.978023 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.984538 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67sf\" (UniqueName: \"kubernetes.io/projected/199cf4dd-ab6f-4d59-9a82-86c613352012-kube-api-access-l67sf\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:18 crc kubenswrapper[4940]: I0223 09:11:18.987621 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.003240 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-config-data\") pod \"nova-cell0-cell-mapping-vvclk\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") " pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.045993 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f670bbd9-012d-433a-90ec-91662031476c-logs\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.046107 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.046210 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-config-data\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.046341 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwrs\" (UniqueName: \"kubernetes.io/projected/f670bbd9-012d-433a-90ec-91662031476c-kube-api-access-lpwrs\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.072204 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.073023 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.092971 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.094659 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.099999 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.137700 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.147752 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwrs\" (UniqueName: \"kubernetes.io/projected/f670bbd9-012d-433a-90ec-91662031476c-kube-api-access-lpwrs\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.147819 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f670bbd9-012d-433a-90ec-91662031476c-logs\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.147921 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.148048 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-config-data\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.150872 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f670bbd9-012d-433a-90ec-91662031476c-logs\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.151143 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.155775 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.167540 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-config-data\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.174331 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.175500 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.186305 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.426126 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.426398 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261fa9bc-8be5-4db8-a549-14d2917921cd-logs\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.426690 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.426720 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.426754 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-config-data\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.246131 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwrs\" (UniqueName: \"kubernetes.io/projected/f670bbd9-012d-433a-90ec-91662031476c-kube-api-access-lpwrs\") pod \"nova-api-0\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") " pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.454302 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfp4\" (UniqueName: \"kubernetes.io/projected/070f7f4a-ea04-483e-a0c6-719372ff945e-kube-api-access-dmfp4\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.457076 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4pb\" (UniqueName: \"kubernetes.io/projected/261fa9bc-8be5-4db8-a549-14d2917921cd-kube-api-access-ff4pb\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.486412 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerStarted","Data":"92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222"}
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.495035 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.500082 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560250 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560444 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261fa9bc-8be5-4db8-a549-14d2917921cd-logs\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560506 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560532 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560569 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-config-data\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560601 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfp4\" (UniqueName: \"kubernetes.io/projected/070f7f4a-ea04-483e-a0c6-719372ff945e-kube-api-access-dmfp4\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.560695 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4pb\" (UniqueName: \"kubernetes.io/projected/261fa9bc-8be5-4db8-a549-14d2917921cd-kube-api-access-ff4pb\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.586442 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261fa9bc-8be5-4db8-a549-14d2917921cd-logs\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.613398 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-config-data\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.618405 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.618702 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4pb\" (UniqueName: \"kubernetes.io/projected/261fa9bc-8be5-4db8-a549-14d2917921cd-kube-api-access-ff4pb\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.619293 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.619390 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.619779 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.645311 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmfp4\" (UniqueName: \"kubernetes.io/projected/070f7f4a-ea04-483e-a0c6-719372ff945e-kube-api-access-dmfp4\") pod \"nova-cell1-novncproxy-0\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.659536 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.661497 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.669695 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.680674 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.704697 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.717114 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-dwspq"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.718734 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.729482 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.730123 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-dwspq"]
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.780372 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.780478 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-config\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781544 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-svc\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781589 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781650 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vbt\" (UniqueName: \"kubernetes.io/projected/641aa3c3-0f60-4529-9422-9343f561827f-kube-api-access-q2vbt\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781687 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781703 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-config-data\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781733 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkqf\" (UniqueName: \"kubernetes.io/projected/88000ec3-b551-47f7-99c3-79b10c5dcdaf-kube-api-access-nwkqf\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:19 crc kubenswrapper[4940]: I0223 09:11:19.781755 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761108 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-svc\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761505 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761560 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vbt\" (UniqueName: \"kubernetes.io/projected/641aa3c3-0f60-4529-9422-9343f561827f-kube-api-access-q2vbt\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761601 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761652 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-config-data\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwkqf\" (UniqueName: \"kubernetes.io/projected/88000ec3-b551-47f7-99c3-79b10c5dcdaf-kube-api-access-nwkqf\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761773 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761816 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.761940 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-config\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.763344 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-config\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.774891 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.776046 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-svc\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.776707 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.780532 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.782470 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.822795 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.830626 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vbt\" (UniqueName: \"kubernetes.io/projected/641aa3c3-0f60-4529-9422-9343f561827f-kube-api-access-q2vbt\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.837812 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwkqf\" (UniqueName: \"kubernetes.io/projected/88000ec3-b551-47f7-99c3-79b10c5dcdaf-kube-api-access-nwkqf\") pod \"dnsmasq-dns-6b6c754dc9-dwspq\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.838815 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-config-data\") pod \"nova-scheduler-0\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.906731 4940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.907096 4940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.917212 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 09:11:20 crc kubenswrapper[4940]: I0223 09:11:20.994682 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-vvclk"]
Feb 23 09:11:21 crc kubenswrapper[4940]: I0223 09:11:21.012084 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq"
Feb 23 09:11:21 crc kubenswrapper[4940]: I0223 09:11:21.246143 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 09:11:21 crc kubenswrapper[4940]: W0223 09:11:21.416757 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf670bbd9_012d_433a_90ec_91662031476c.slice/crio-df5e9bc224253ed1d5f8a4fee8b76cf3ca2abe596a3562f306b52be892dbc767 WatchSource:0}: Error finding container df5e9bc224253ed1d5f8a4fee8b76cf3ca2abe596a3562f306b52be892dbc767: Status 404 returned error can't find the container with id df5e9bc224253ed1d5f8a4fee8b76cf3ca2abe596a3562f306b52be892dbc767
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.011962 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vvclk" event={"ID":"199cf4dd-ab6f-4d59-9a82-86c613352012","Type":"ContainerStarted","Data":"879d09cd05fdd673a469aaef8f885d26bd2f4d676ecd9134c6269c67a9b0c59e"}
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.012383 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vvclk" event={"ID":"199cf4dd-ab6f-4d59-9a82-86c613352012","Type":"ContainerStarted","Data":"09873dfa5206f12526298bd90399256a60812a1ac95415d38c4613ed27051548"}
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.053103 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-vvclk" podStartSLOduration=4.053079407 podStartE2EDuration="4.053079407s" podCreationTimestamp="2026-02-23 09:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:22.042990272 +0000 UTC m=+1413.426196449" watchObservedRunningTime="2026-02-23 09:11:22.053079407 +0000 UTC m=+1413.436285564"
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.076379 4940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.076408 4940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.076857 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f670bbd9-012d-433a-90ec-91662031476c","Type":"ContainerStarted","Data":"df5e9bc224253ed1d5f8a4fee8b76cf3ca2abe596a3562f306b52be892dbc767"}
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.721502 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.755008 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f228632e-c649-4cbf-9a32-5baad303ef28" containerName="galera" probeResult="failure" output="command timed out"
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.763574 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f228632e-c649-4cbf-9a32-5baad303ef28" containerName="galera" probeResult="failure" output="command timed out"
Feb 23 09:11:22 crc kubenswrapper[4940]: I0223 09:11:22.821600 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.177338 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-dwspq"]
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.200015 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"641aa3c3-0f60-4529-9422-9343f561827f","Type":"ContainerStarted","Data":"51e257c07327a6cf9d23979b642b55e65f07d5aacd04bb8fb2f28ab0dfca6b7e"}
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.204806 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.213172 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261fa9bc-8be5-4db8-a549-14d2917921cd","Type":"ContainerStarted","Data":"0deaf77bb82ebcc3a0314660473fc1e81838b837b39ee7d3f00ee795ee8eef85"}
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.329427 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ctwwt"]
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.348752 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.357015 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.357012 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.399678 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ctwwt"]
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.466208 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.466295 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-config-data\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.466480 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbht\" (UniqueName: \"kubernetes.io/projected/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-kube-api-access-5bbht\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.796959 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-scripts\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.898945 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-config-data\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.901015 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbht\" (UniqueName: \"kubernetes.io/projected/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-kube-api-access-5bbht\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID:
\"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.901796 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-scripts\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.902039 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.923052 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.923328 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-config-data\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.926142 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bbht\" (UniqueName: \"kubernetes.io/projected/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-kube-api-access-5bbht\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: 
\"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:23 crc kubenswrapper[4940]: I0223 09:11:23.928099 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-scripts\") pod \"nova-cell1-conductor-db-sync-ctwwt\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") " pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.376974 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ctwwt" Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.432555 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" event={"ID":"88000ec3-b551-47f7-99c3-79b10c5dcdaf","Type":"ContainerStarted","Data":"965bbe927c7833763a52cfb025f821b1df8b7bf32a4f0e279b3b2007fa5ab308"} Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.432633 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" event={"ID":"88000ec3-b551-47f7-99c3-79b10c5dcdaf","Type":"ContainerStarted","Data":"3c7bc377ec6854548ce502c58c02554a13bbb84ad40ce08e3db6d2967057a1c3"} Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.484406 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerStarted","Data":"7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a"} Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.484730 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.503252 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"070f7f4a-ea04-483e-a0c6-719372ff945e","Type":"ContainerStarted","Data":"31b2872875f0077a3d25df81302a74215149e83a6e62fdddabc479f08711e9f7"} Feb 23 09:11:24 crc kubenswrapper[4940]: I0223 09:11:24.577256 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.58506021 podStartE2EDuration="9.577229691s" podCreationTimestamp="2026-02-23 09:11:15 +0000 UTC" firstStartedPulling="2026-02-23 09:11:16.018567633 +0000 UTC m=+1407.401773800" lastFinishedPulling="2026-02-23 09:11:22.010737124 +0000 UTC m=+1413.393943281" observedRunningTime="2026-02-23 09:11:24.513519661 +0000 UTC m=+1415.896725828" watchObservedRunningTime="2026-02-23 09:11:24.577229691 +0000 UTC m=+1415.960435848" Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.316148 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ctwwt"] Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.748664 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.748784 4940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.817487 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ctwwt" event={"ID":"5f88e18b-bcfa-4446-bbdf-8824c2c94f65","Type":"ContainerStarted","Data":"1c775f5925847df116e750e3c3fbbf85acd5623e6c61976cc6e6c93483a82fd3"} Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.841315 4940 generic.go:334] "Generic (PLEG): container finished" podID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerID="965bbe927c7833763a52cfb025f821b1df8b7bf32a4f0e279b3b2007fa5ab308" exitCode=0 Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.843355 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" 
event={"ID":"88000ec3-b551-47f7-99c3-79b10c5dcdaf","Type":"ContainerDied","Data":"965bbe927c7833763a52cfb025f821b1df8b7bf32a4f0e279b3b2007fa5ab308"} Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.843390 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" event={"ID":"88000ec3-b551-47f7-99c3-79b10c5dcdaf","Type":"ContainerStarted","Data":"8bbfb7a869fbdb4ca66fcf6102a3b88f8c75bcf22c61502094ff3e4a377b11b7"} Feb 23 09:11:25 crc kubenswrapper[4940]: I0223 09:11:25.843426 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" Feb 23 09:11:26 crc kubenswrapper[4940]: I0223 09:11:26.021792 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 09:11:26 crc kubenswrapper[4940]: I0223 09:11:26.083168 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" podStartSLOduration=7.083148811 podStartE2EDuration="7.083148811s" podCreationTimestamp="2026-02-23 09:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:25.867179675 +0000 UTC m=+1417.250385832" watchObservedRunningTime="2026-02-23 09:11:26.083148811 +0000 UTC m=+1417.466354968" Feb 23 09:11:26 crc kubenswrapper[4940]: I0223 09:11:26.616117 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:26 crc kubenswrapper[4940]: I0223 09:11:26.616451 4940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 09:11:26 crc kubenswrapper[4940]: I0223 09:11:26.696969 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 09:11:26 crc kubenswrapper[4940]: I0223 09:11:26.705197 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Feb 23 09:11:27 crc kubenswrapper[4940]: I0223 09:11:27.220804 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ctwwt" event={"ID":"5f88e18b-bcfa-4446-bbdf-8824c2c94f65","Type":"ContainerStarted","Data":"e46cc3d8abf9da36dc6d70b4803b2e8c3cb35392fdd83d5a8598848f99040823"} Feb 23 09:11:27 crc kubenswrapper[4940]: I0223 09:11:27.878378 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 09:11:27 crc kubenswrapper[4940]: I0223 09:11:27.910891 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ctwwt" podStartSLOduration=4.910864862 podStartE2EDuration="4.910864862s" podCreationTimestamp="2026-02-23 09:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:27.266561996 +0000 UTC m=+1418.649768173" watchObservedRunningTime="2026-02-23 09:11:27.910864862 +0000 UTC m=+1419.294071029" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.014810 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.114131 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-gv984"] Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.114475 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56696ff475-gv984" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerName="dnsmasq-dns" containerID="cri-o://6c4b56f59540d71e867900ff830d197fc4c40d5bdb06f597b5ce3e5b60639c97" gracePeriod=10 Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.283244 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"070f7f4a-ea04-483e-a0c6-719372ff945e","Type":"ContainerStarted","Data":"b46cb1fb248c63fc449bf292c35c3c6b3bde0ee2982647e59059beb0771def29"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.283401 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="070f7f4a-ea04-483e-a0c6-719372ff945e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b46cb1fb248c63fc449bf292c35c3c6b3bde0ee2982647e59059beb0771def29" gracePeriod=30 Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.304771 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f670bbd9-012d-433a-90ec-91662031476c","Type":"ContainerStarted","Data":"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.304830 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f670bbd9-012d-433a-90ec-91662031476c","Type":"ContainerStarted","Data":"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.307237 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.50189548 podStartE2EDuration="12.307219431s" podCreationTimestamp="2026-02-23 09:11:19 +0000 UTC" firstStartedPulling="2026-02-23 09:11:23.255737864 +0000 UTC m=+1414.638944021" lastFinishedPulling="2026-02-23 09:11:30.061061815 +0000 UTC m=+1421.444267972" observedRunningTime="2026-02-23 09:11:31.3053101 +0000 UTC m=+1422.688516277" watchObservedRunningTime="2026-02-23 09:11:31.307219431 +0000 UTC m=+1422.690425588" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.322295 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"641aa3c3-0f60-4529-9422-9343f561827f","Type":"ContainerStarted","Data":"6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.325463 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-log" containerID="cri-o://3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7" gracePeriod=30 Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.325842 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261fa9bc-8be5-4db8-a549-14d2917921cd","Type":"ContainerStarted","Data":"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.325909 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261fa9bc-8be5-4db8-a549-14d2917921cd","Type":"ContainerStarted","Data":"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.325932 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-metadata" containerID="cri-o://ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d" gracePeriod=30 Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.330638 4940 generic.go:334] "Generic (PLEG): container finished" podID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerID="6c4b56f59540d71e867900ff830d197fc4c40d5bdb06f597b5ce3e5b60639c97" exitCode=0 Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.330694 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-gv984" 
event={"ID":"4ddde6a1-2d30-4c95-aac1-ab2f32130f14","Type":"ContainerDied","Data":"6c4b56f59540d71e867900ff830d197fc4c40d5bdb06f597b5ce3e5b60639c97"} Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.340964 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.741528793 podStartE2EDuration="13.340939864s" podCreationTimestamp="2026-02-23 09:11:18 +0000 UTC" firstStartedPulling="2026-02-23 09:11:21.460701314 +0000 UTC m=+1412.843907471" lastFinishedPulling="2026-02-23 09:11:30.060112365 +0000 UTC m=+1421.443318542" observedRunningTime="2026-02-23 09:11:31.327421301 +0000 UTC m=+1422.710627478" watchObservedRunningTime="2026-02-23 09:11:31.340939864 +0000 UTC m=+1422.724146021" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.381699 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.614934432 podStartE2EDuration="12.381393147s" podCreationTimestamp="2026-02-23 09:11:19 +0000 UTC" firstStartedPulling="2026-02-23 09:11:22.302458926 +0000 UTC m=+1413.685665083" lastFinishedPulling="2026-02-23 09:11:30.068917641 +0000 UTC m=+1421.452123798" observedRunningTime="2026-02-23 09:11:31.374707998 +0000 UTC m=+1422.757914165" watchObservedRunningTime="2026-02-23 09:11:31.381393147 +0000 UTC m=+1422.764599304" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.405338 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.466021958 podStartE2EDuration="12.405321355s" podCreationTimestamp="2026-02-23 09:11:19 +0000 UTC" firstStartedPulling="2026-02-23 09:11:23.121976825 +0000 UTC m=+1414.505182982" lastFinishedPulling="2026-02-23 09:11:30.061276222 +0000 UTC m=+1421.444482379" observedRunningTime="2026-02-23 09:11:31.397164869 +0000 UTC m=+1422.780371026" watchObservedRunningTime="2026-02-23 09:11:31.405321355 +0000 UTC m=+1422.788527512" Feb 23 
09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.757865 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-gv984" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.837470 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wghvg\" (UniqueName: \"kubernetes.io/projected/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-kube-api-access-wghvg\") pod \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.837585 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-svc\") pod \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.837631 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-config\") pod \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.837937 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-swift-storage-0\") pod \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.837956 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-sb\") pod \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 
09:11:31.837985 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-nb\") pod \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\" (UID: \"4ddde6a1-2d30-4c95-aac1-ab2f32130f14\") " Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.867832 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-kube-api-access-wghvg" (OuterVolumeSpecName: "kube-api-access-wghvg") pod "4ddde6a1-2d30-4c95-aac1-ab2f32130f14" (UID: "4ddde6a1-2d30-4c95-aac1-ab2f32130f14"). InnerVolumeSpecName "kube-api-access-wghvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.928091 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ddde6a1-2d30-4c95-aac1-ab2f32130f14" (UID: "4ddde6a1-2d30-4c95-aac1-ab2f32130f14"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.949721 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.949740 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wghvg\" (UniqueName: \"kubernetes.io/projected/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-kube-api-access-wghvg\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.956226 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ddde6a1-2d30-4c95-aac1-ab2f32130f14" (UID: "4ddde6a1-2d30-4c95-aac1-ab2f32130f14"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.971148 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ddde6a1-2d30-4c95-aac1-ab2f32130f14" (UID: "4ddde6a1-2d30-4c95-aac1-ab2f32130f14"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.980080 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4ddde6a1-2d30-4c95-aac1-ab2f32130f14" (UID: "4ddde6a1-2d30-4c95-aac1-ab2f32130f14"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.982253 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-config" (OuterVolumeSpecName: "config") pod "4ddde6a1-2d30-4c95-aac1-ab2f32130f14" (UID: "4ddde6a1-2d30-4c95-aac1-ab2f32130f14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:11:31 crc kubenswrapper[4940]: I0223 09:11:31.989563 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.055515 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff4pb\" (UniqueName: \"kubernetes.io/projected/261fa9bc-8be5-4db8-a549-14d2917921cd-kube-api-access-ff4pb\") pod \"261fa9bc-8be5-4db8-a549-14d2917921cd\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.055653 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261fa9bc-8be5-4db8-a549-14d2917921cd-logs\") pod \"261fa9bc-8be5-4db8-a549-14d2917921cd\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.055731 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-config-data\") pod \"261fa9bc-8be5-4db8-a549-14d2917921cd\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.055782 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-combined-ca-bundle\") pod 
\"261fa9bc-8be5-4db8-a549-14d2917921cd\" (UID: \"261fa9bc-8be5-4db8-a549-14d2917921cd\") " Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.056166 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261fa9bc-8be5-4db8-a549-14d2917921cd-logs" (OuterVolumeSpecName: "logs") pod "261fa9bc-8be5-4db8-a549-14d2917921cd" (UID: "261fa9bc-8be5-4db8-a549-14d2917921cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.057421 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.057444 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/261fa9bc-8be5-4db8-a549-14d2917921cd-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.057457 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.057471 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.057482 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ddde6a1-2d30-4c95-aac1-ab2f32130f14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.058769 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/261fa9bc-8be5-4db8-a549-14d2917921cd-kube-api-access-ff4pb" (OuterVolumeSpecName: "kube-api-access-ff4pb") pod "261fa9bc-8be5-4db8-a549-14d2917921cd" (UID: "261fa9bc-8be5-4db8-a549-14d2917921cd"). InnerVolumeSpecName "kube-api-access-ff4pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.085603 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261fa9bc-8be5-4db8-a549-14d2917921cd" (UID: "261fa9bc-8be5-4db8-a549-14d2917921cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.093919 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-config-data" (OuterVolumeSpecName: "config-data") pod "261fa9bc-8be5-4db8-a549-14d2917921cd" (UID: "261fa9bc-8be5-4db8-a549-14d2917921cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.159700 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff4pb\" (UniqueName: \"kubernetes.io/projected/261fa9bc-8be5-4db8-a549-14d2917921cd-kube-api-access-ff4pb\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.159738 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.159747 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261fa9bc-8be5-4db8-a549-14d2917921cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.358579 4940 generic.go:334] "Generic (PLEG): container finished" podID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerID="ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d" exitCode=0
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.358640 4940 generic.go:334] "Generic (PLEG): container finished" podID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerID="3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7" exitCode=143
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.358726 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261fa9bc-8be5-4db8-a549-14d2917921cd","Type":"ContainerDied","Data":"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"}
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.358795 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261fa9bc-8be5-4db8-a549-14d2917921cd","Type":"ContainerDied","Data":"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"}
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.358815 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"261fa9bc-8be5-4db8-a549-14d2917921cd","Type":"ContainerDied","Data":"0deaf77bb82ebcc3a0314660473fc1e81838b837b39ee7d3f00ee795ee8eef85"}
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.358846 4940 scope.go:117] "RemoveContainer" containerID="ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.359058 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.382857 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56696ff475-gv984"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.383669 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56696ff475-gv984" event={"ID":"4ddde6a1-2d30-4c95-aac1-ab2f32130f14","Type":"ContainerDied","Data":"88e47992b3f03052d1d84eb4c4fb42281e097e3b818b50306ec6e627d936ad0d"}
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.384263 4940 scope.go:117] "RemoveContainer" containerID="3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.424978 4940 scope.go:117] "RemoveContainer" containerID="ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"
Feb 23 09:11:32 crc kubenswrapper[4940]: E0223 09:11:32.425747 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d\": container with ID starting with ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d not found: ID does not exist" containerID="ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.425781 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"} err="failed to get container status \"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d\": rpc error: code = NotFound desc = could not find container \"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d\": container with ID starting with ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d not found: ID does not exist"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.425802 4940 scope.go:117] "RemoveContainer" containerID="3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"
Feb 23 09:11:32 crc kubenswrapper[4940]: E0223 09:11:32.427961 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7\": container with ID starting with 3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7 not found: ID does not exist" containerID="3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.427986 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"} err="failed to get container status \"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7\": rpc error: code = NotFound desc = could not find container \"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7\": container with ID starting with 3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7 not found: ID does not exist"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.428002 4940 scope.go:117] "RemoveContainer" containerID="ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.438176 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d"} err="failed to get container status \"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d\": rpc error: code = NotFound desc = could not find container \"ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d\": container with ID starting with ffa6aafa373b088c4937f18d6b892263415d97e0700bf69f15b1abcf6c14bf5d not found: ID does not exist"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.438228 4940 scope.go:117] "RemoveContainer" containerID="3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.440269 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.443835 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7"} err="failed to get container status \"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7\": rpc error: code = NotFound desc = could not find container \"3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7\": container with ID starting with 3053df7c9665f3fe141fb86090bc583ccc1060f636fd08d38ce8711cfa7275f7 not found: ID does not exist"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.443909 4940 scope.go:117] "RemoveContainer" containerID="6c4b56f59540d71e867900ff830d197fc4c40d5bdb06f597b5ce3e5b60639c97"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.456667 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.469331 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:32 crc kubenswrapper[4940]: E0223 09:11:32.469936 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerName="dnsmasq-dns"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.469963 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerName="dnsmasq-dns"
Feb 23 09:11:32 crc kubenswrapper[4940]: E0223 09:11:32.470002 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-metadata"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.470013 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-metadata"
Feb 23 09:11:32 crc kubenswrapper[4940]: E0223 09:11:32.470037 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-log"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.470046 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-log"
Feb 23 09:11:32 crc kubenswrapper[4940]: E0223 09:11:32.470065 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerName="init"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.470074 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerName="init"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.470335 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" containerName="dnsmasq-dns"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.470356 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-log"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.470388 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" containerName="nova-metadata-metadata"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.471842 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.473372 4940 scope.go:117] "RemoveContainer" containerID="db15bfaf7a8b72c195d6fba810b00b10561032db7fdd4cbff5d73c18bb181fe0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.476386 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.476682 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.515724 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-gv984"]
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.531060 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56696ff475-gv984"]
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.548424 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.588515 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.588677 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab19ece-433f-48c7-8da3-d025f968169c-logs\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.588925 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkrr\" (UniqueName: \"kubernetes.io/projected/7ab19ece-433f-48c7-8da3-d025f968169c-kube-api-access-nmkrr\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.588980 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.589000 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-config-data\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.691638 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmkrr\" (UniqueName: \"kubernetes.io/projected/7ab19ece-433f-48c7-8da3-d025f968169c-kube-api-access-nmkrr\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.691713 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.691737 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-config-data\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.691805 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.691865 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab19ece-433f-48c7-8da3-d025f968169c-logs\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.692338 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab19ece-433f-48c7-8da3-d025f968169c-logs\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.698321 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-config-data\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.699536 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.700952 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.715073 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkrr\" (UniqueName: \"kubernetes.io/projected/7ab19ece-433f-48c7-8da3-d025f968169c-kube-api-access-nmkrr\") pod \"nova-metadata-0\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:32 crc kubenswrapper[4940]: I0223 09:11:32.889170 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:33 crc kubenswrapper[4940]: I0223 09:11:33.374877 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261fa9bc-8be5-4db8-a549-14d2917921cd" path="/var/lib/kubelet/pods/261fa9bc-8be5-4db8-a549-14d2917921cd/volumes"
Feb 23 09:11:33 crc kubenswrapper[4940]: I0223 09:11:33.376765 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ddde6a1-2d30-4c95-aac1-ab2f32130f14" path="/var/lib/kubelet/pods/4ddde6a1-2d30-4c95-aac1-ab2f32130f14/volumes"
Feb 23 09:11:33 crc kubenswrapper[4940]: I0223 09:11:33.378300 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:33 crc kubenswrapper[4940]: W0223 09:11:33.394150 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ab19ece_433f_48c7_8da3_d025f968169c.slice/crio-7f1bd89c46e815978a227b8efc3bd3195a52edc738f1486dda3ed0f3e35d8d36 WatchSource:0}: Error finding container 7f1bd89c46e815978a227b8efc3bd3195a52edc738f1486dda3ed0f3e35d8d36: Status 404 returned error can't find the container with id 7f1bd89c46e815978a227b8efc3bd3195a52edc738f1486dda3ed0f3e35d8d36
Feb 23 09:11:34 crc kubenswrapper[4940]: I0223 09:11:34.409842 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ab19ece-433f-48c7-8da3-d025f968169c","Type":"ContainerStarted","Data":"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff"}
Feb 23 09:11:34 crc kubenswrapper[4940]: I0223 09:11:34.410143 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ab19ece-433f-48c7-8da3-d025f968169c","Type":"ContainerStarted","Data":"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1"}
Feb 23 09:11:34 crc kubenswrapper[4940]: I0223 09:11:34.410158 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ab19ece-433f-48c7-8da3-d025f968169c","Type":"ContainerStarted","Data":"7f1bd89c46e815978a227b8efc3bd3195a52edc738f1486dda3ed0f3e35d8d36"}
Feb 23 09:11:34 crc kubenswrapper[4940]: I0223 09:11:34.444532 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.4445069569999998 podStartE2EDuration="2.444506957s" podCreationTimestamp="2026-02-23 09:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:34.432733416 +0000 UTC m=+1425.815939623" watchObservedRunningTime="2026-02-23 09:11:34.444506957 +0000 UTC m=+1425.827713124"
Feb 23 09:11:35 crc kubenswrapper[4940]: I0223 09:11:35.784077 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 09:11:35 crc kubenswrapper[4940]: I0223 09:11:35.919129 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 09:11:36 crc kubenswrapper[4940]: I0223 09:11:36.452356 4940 generic.go:334] "Generic (PLEG): container finished" podID="199cf4dd-ab6f-4d59-9a82-86c613352012" containerID="879d09cd05fdd673a469aaef8f885d26bd2f4d676ecd9134c6269c67a9b0c59e" exitCode=0
Feb 23 09:11:36 crc kubenswrapper[4940]: I0223 09:11:36.452489 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vvclk" event={"ID":"199cf4dd-ab6f-4d59-9a82-86c613352012","Type":"ContainerDied","Data":"879d09cd05fdd673a469aaef8f885d26bd2f4d676ecd9134c6269c67a9b0c59e"}
Feb 23 09:11:37 crc kubenswrapper[4940]: I0223 09:11:37.462872 4940 generic.go:334] "Generic (PLEG): container finished" podID="5f88e18b-bcfa-4446-bbdf-8824c2c94f65" containerID="e46cc3d8abf9da36dc6d70b4803b2e8c3cb35392fdd83d5a8598848f99040823" exitCode=0
Feb 23 09:11:37 crc kubenswrapper[4940]: I0223 09:11:37.462953 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ctwwt" event={"ID":"5f88e18b-bcfa-4446-bbdf-8824c2c94f65","Type":"ContainerDied","Data":"e46cc3d8abf9da36dc6d70b4803b2e8c3cb35392fdd83d5a8598848f99040823"}
Feb 23 09:11:37 crc kubenswrapper[4940]: I0223 09:11:37.888876 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 09:11:37 crc kubenswrapper[4940]: I0223 09:11:37.889269 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 09:11:37 crc kubenswrapper[4940]: I0223 09:11:37.926782 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.131532 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67sf\" (UniqueName: \"kubernetes.io/projected/199cf4dd-ab6f-4d59-9a82-86c613352012-kube-api-access-l67sf\") pod \"199cf4dd-ab6f-4d59-9a82-86c613352012\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") "
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.131859 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-scripts\") pod \"199cf4dd-ab6f-4d59-9a82-86c613352012\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") "
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.131905 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-config-data\") pod \"199cf4dd-ab6f-4d59-9a82-86c613352012\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") "
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.131923 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-combined-ca-bundle\") pod \"199cf4dd-ab6f-4d59-9a82-86c613352012\" (UID: \"199cf4dd-ab6f-4d59-9a82-86c613352012\") "
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.136671 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-scripts" (OuterVolumeSpecName: "scripts") pod "199cf4dd-ab6f-4d59-9a82-86c613352012" (UID: "199cf4dd-ab6f-4d59-9a82-86c613352012"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.146289 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199cf4dd-ab6f-4d59-9a82-86c613352012-kube-api-access-l67sf" (OuterVolumeSpecName: "kube-api-access-l67sf") pod "199cf4dd-ab6f-4d59-9a82-86c613352012" (UID: "199cf4dd-ab6f-4d59-9a82-86c613352012"). InnerVolumeSpecName "kube-api-access-l67sf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.160766 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-config-data" (OuterVolumeSpecName: "config-data") pod "199cf4dd-ab6f-4d59-9a82-86c613352012" (UID: "199cf4dd-ab6f-4d59-9a82-86c613352012"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.164588 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "199cf4dd-ab6f-4d59-9a82-86c613352012" (UID: "199cf4dd-ab6f-4d59-9a82-86c613352012"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.234482 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.234516 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.234528 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l67sf\" (UniqueName: \"kubernetes.io/projected/199cf4dd-ab6f-4d59-9a82-86c613352012-kube-api-access-l67sf\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.234536 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/199cf4dd-ab6f-4d59-9a82-86c613352012-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.474077 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-vvclk" event={"ID":"199cf4dd-ab6f-4d59-9a82-86c613352012","Type":"ContainerDied","Data":"09873dfa5206f12526298bd90399256a60812a1ac95415d38c4613ed27051548"}
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.474133 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09873dfa5206f12526298bd90399256a60812a1ac95415d38c4613ed27051548"
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.474183 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-vvclk"
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.664581 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.665378 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-log" containerID="cri-o://1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62" gracePeriod=30
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.665476 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-api" containerID="cri-o://5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d" gracePeriod=30
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.695459 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.695695 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="641aa3c3-0f60-4529-9422-9343f561827f" containerName="nova-scheduler-scheduler" containerID="cri-o://6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0" gracePeriod=30
Feb 23 09:11:38 crc kubenswrapper[4940]: I0223 09:11:38.752387 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.029438 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.166600 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bbht\" (UniqueName: \"kubernetes.io/projected/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-kube-api-access-5bbht\") pod \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.166889 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-combined-ca-bundle\") pod \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.167113 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-scripts\") pod \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.167380 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-config-data\") pod \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\" (UID: \"5f88e18b-bcfa-4446-bbdf-8824c2c94f65\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.177970 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-scripts" (OuterVolumeSpecName: "scripts") pod "5f88e18b-bcfa-4446-bbdf-8824c2c94f65" (UID: "5f88e18b-bcfa-4446-bbdf-8824c2c94f65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.178761 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-kube-api-access-5bbht" (OuterVolumeSpecName: "kube-api-access-5bbht") pod "5f88e18b-bcfa-4446-bbdf-8824c2c94f65" (UID: "5f88e18b-bcfa-4446-bbdf-8824c2c94f65"). InnerVolumeSpecName "kube-api-access-5bbht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.247282 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-config-data" (OuterVolumeSpecName: "config-data") pod "5f88e18b-bcfa-4446-bbdf-8824c2c94f65" (UID: "5f88e18b-bcfa-4446-bbdf-8824c2c94f65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.269055 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bbht\" (UniqueName: \"kubernetes.io/projected/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-kube-api-access-5bbht\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.269090 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.269101 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.273384 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f88e18b-bcfa-4446-bbdf-8824c2c94f65" (UID: "5f88e18b-bcfa-4446-bbdf-8824c2c94f65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.344469 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.385281 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f88e18b-bcfa-4446-bbdf-8824c2c94f65-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.486593 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-combined-ca-bundle\") pod \"f670bbd9-012d-433a-90ec-91662031476c\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.486719 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-config-data\") pod \"f670bbd9-012d-433a-90ec-91662031476c\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.486761 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f670bbd9-012d-433a-90ec-91662031476c-logs\") pod \"f670bbd9-012d-433a-90ec-91662031476c\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.486830 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpwrs\" (UniqueName: \"kubernetes.io/projected/f670bbd9-012d-433a-90ec-91662031476c-kube-api-access-lpwrs\") pod \"f670bbd9-012d-433a-90ec-91662031476c\" (UID: \"f670bbd9-012d-433a-90ec-91662031476c\") "
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.488702 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f670bbd9-012d-433a-90ec-91662031476c-logs" (OuterVolumeSpecName: "logs") pod "f670bbd9-012d-433a-90ec-91662031476c" (UID: "f670bbd9-012d-433a-90ec-91662031476c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.496759 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f670bbd9-012d-433a-90ec-91662031476c-kube-api-access-lpwrs" (OuterVolumeSpecName: "kube-api-access-lpwrs") pod "f670bbd9-012d-433a-90ec-91662031476c" (UID: "f670bbd9-012d-433a-90ec-91662031476c"). InnerVolumeSpecName "kube-api-access-lpwrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.504733 4940 generic.go:334] "Generic (PLEG): container finished" podID="f670bbd9-012d-433a-90ec-91662031476c" containerID="5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d" exitCode=0
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.504766 4940 generic.go:334] "Generic (PLEG): container finished" podID="f670bbd9-012d-433a-90ec-91662031476c" containerID="1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62" exitCode=143
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.504816 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.505167 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f670bbd9-012d-433a-90ec-91662031476c","Type":"ContainerDied","Data":"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"}
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.505312 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f670bbd9-012d-433a-90ec-91662031476c","Type":"ContainerDied","Data":"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"}
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.505416 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f670bbd9-012d-433a-90ec-91662031476c","Type":"ContainerDied","Data":"df5e9bc224253ed1d5f8a4fee8b76cf3ca2abe596a3562f306b52be892dbc767"}
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.505520 4940 scope.go:117] "RemoveContainer" containerID="5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.516075 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ctwwt" event={"ID":"5f88e18b-bcfa-4446-bbdf-8824c2c94f65","Type":"ContainerDied","Data":"1c775f5925847df116e750e3c3fbbf85acd5623e6c61976cc6e6c93483a82fd3"}
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.516114 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c775f5925847df116e750e3c3fbbf85acd5623e6c61976cc6e6c93483a82fd3"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.516141 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ctwwt"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.516219 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-log" containerID="cri-o://f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1" gracePeriod=30
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.518299 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-metadata" containerID="cri-o://0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff" gracePeriod=30
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.544081 4940 scope.go:117] "RemoveContainer" containerID="1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.552527 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-config-data" (OuterVolumeSpecName: "config-data") pod "f670bbd9-012d-433a-90ec-91662031476c" (UID: "f670bbd9-012d-433a-90ec-91662031476c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.557473 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f670bbd9-012d-433a-90ec-91662031476c" (UID: "f670bbd9-012d-433a-90ec-91662031476c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.569104 4940 scope.go:117] "RemoveContainer" containerID="5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"
Feb 23 09:11:39 crc kubenswrapper[4940]: E0223 09:11:39.572454 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d\": container with ID starting with 5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d not found: ID does not exist" containerID="5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.572496 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"} err="failed to get container status \"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d\": rpc error: code = NotFound desc = could not find container \"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d\": container with ID starting with 5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d not found: ID does not exist"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.572521 4940 scope.go:117] "RemoveContainer" containerID="1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"
Feb 23 09:11:39 crc kubenswrapper[4940]: E0223 09:11:39.576222 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62\": container with ID starting with 1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62 not found: ID does not exist" containerID="1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"
Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.576256 
4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"} err="failed to get container status \"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62\": rpc error: code = NotFound desc = could not find container \"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62\": container with ID starting with 1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62 not found: ID does not exist" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.576279 4940 scope.go:117] "RemoveContainer" containerID="5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.580560 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d"} err="failed to get container status \"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d\": rpc error: code = NotFound desc = could not find container \"5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d\": container with ID starting with 5a92f3f4362efdeed075c276c87faa8689c42a71fae90b57f96ab998690ccc7d not found: ID does not exist" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.580601 4940 scope.go:117] "RemoveContainer" containerID="1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.585898 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62"} err="failed to get container status \"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62\": rpc error: code = NotFound desc = could not find container \"1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62\": container with ID starting with 
1e464dfcdfbb9e73debefa16adbbc384cba48854000a225c6fdce4bcf7a8ca62 not found: ID does not exist" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.586605 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 09:11:39 crc kubenswrapper[4940]: E0223 09:11:39.587197 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199cf4dd-ab6f-4d59-9a82-86c613352012" containerName="nova-manage" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587214 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="199cf4dd-ab6f-4d59-9a82-86c613352012" containerName="nova-manage" Feb 23 09:11:39 crc kubenswrapper[4940]: E0223 09:11:39.587228 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f88e18b-bcfa-4446-bbdf-8824c2c94f65" containerName="nova-cell1-conductor-db-sync" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587234 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f88e18b-bcfa-4446-bbdf-8824c2c94f65" containerName="nova-cell1-conductor-db-sync" Feb 23 09:11:39 crc kubenswrapper[4940]: E0223 09:11:39.587247 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-log" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587253 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-log" Feb 23 09:11:39 crc kubenswrapper[4940]: E0223 09:11:39.587262 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-api" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587268 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-api" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587443 4940 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="199cf4dd-ab6f-4d59-9a82-86c613352012" containerName="nova-manage" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587461 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f88e18b-bcfa-4446-bbdf-8824c2c94f65" containerName="nova-cell1-conductor-db-sync" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587474 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-api" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.587491 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f670bbd9-012d-433a-90ec-91662031476c" containerName="nova-api-log" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.589230 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.589267 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f670bbd9-012d-433a-90ec-91662031476c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.589282 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f670bbd9-012d-433a-90ec-91662031476c-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.589296 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpwrs\" (UniqueName: \"kubernetes.io/projected/f670bbd9-012d-433a-90ec-91662031476c-kube-api-access-lpwrs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.595336 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.598594 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.610670 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.792641 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcxf\" (UniqueName: \"kubernetes.io/projected/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-kube-api-access-shcxf\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.792991 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.793024 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.881706 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.896781 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcxf\" (UniqueName: 
\"kubernetes.io/projected/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-kube-api-access-shcxf\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.896891 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.896927 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.901680 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.906128 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.912628 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.939280 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 09:11:39 
crc kubenswrapper[4940]: I0223 09:11:39.941137 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.941242 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.945661 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 09:11:39 crc kubenswrapper[4940]: I0223 09:11:39.951821 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shcxf\" (UniqueName: \"kubernetes.io/projected/c854cfd6-7319-4a6c-8893-a96cd32bdcd0-kube-api-access-shcxf\") pod \"nova-cell1-conductor-0\" (UID: \"c854cfd6-7319-4a6c-8893-a96cd32bdcd0\") " pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.101213 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d07da4-4091-410a-a0a8-befbd51a314e-logs\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.101333 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnz47\" (UniqueName: \"kubernetes.io/projected/e8d07da4-4091-410a-a0a8-befbd51a314e-kube-api-access-lnz47\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.101429 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-config-data\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 
09:11:40.101548 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.183694 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.203438 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d07da4-4091-410a-a0a8-befbd51a314e-logs\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.203524 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnz47\" (UniqueName: \"kubernetes.io/projected/e8d07da4-4091-410a-a0a8-befbd51a314e-kube-api-access-lnz47\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.203703 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-config-data\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.203869 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.204925 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d07da4-4091-410a-a0a8-befbd51a314e-logs\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.209045 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-config-data\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.209626 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.210070 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.234015 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnz47\" (UniqueName: \"kubernetes.io/projected/e8d07da4-4091-410a-a0a8-befbd51a314e-kube-api-access-lnz47\") pod \"nova-api-0\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.305218 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-combined-ca-bundle\") pod \"7ab19ece-433f-48c7-8da3-d025f968169c\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.305712 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-nova-metadata-tls-certs\") pod \"7ab19ece-433f-48c7-8da3-d025f968169c\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.305758 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-config-data\") pod \"7ab19ece-433f-48c7-8da3-d025f968169c\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.306003 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab19ece-433f-48c7-8da3-d025f968169c-logs\") pod \"7ab19ece-433f-48c7-8da3-d025f968169c\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.306306 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7ab19ece-433f-48c7-8da3-d025f968169c-logs" (OuterVolumeSpecName: "logs") pod "7ab19ece-433f-48c7-8da3-d025f968169c" (UID: "7ab19ece-433f-48c7-8da3-d025f968169c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.306341 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmkrr\" (UniqueName: \"kubernetes.io/projected/7ab19ece-433f-48c7-8da3-d025f968169c-kube-api-access-nmkrr\") pod \"7ab19ece-433f-48c7-8da3-d025f968169c\" (UID: \"7ab19ece-433f-48c7-8da3-d025f968169c\") " Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.306948 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab19ece-433f-48c7-8da3-d025f968169c-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.309669 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab19ece-433f-48c7-8da3-d025f968169c-kube-api-access-nmkrr" (OuterVolumeSpecName: "kube-api-access-nmkrr") pod "7ab19ece-433f-48c7-8da3-d025f968169c" (UID: "7ab19ece-433f-48c7-8da3-d025f968169c"). InnerVolumeSpecName "kube-api-access-nmkrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.320085 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.336425 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-config-data" (OuterVolumeSpecName: "config-data") pod "7ab19ece-433f-48c7-8da3-d025f968169c" (UID: "7ab19ece-433f-48c7-8da3-d025f968169c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.373003 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab19ece-433f-48c7-8da3-d025f968169c" (UID: "7ab19ece-433f-48c7-8da3-d025f968169c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.373504 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7ab19ece-433f-48c7-8da3-d025f968169c" (UID: "7ab19ece-433f-48c7-8da3-d025f968169c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.408248 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmkrr\" (UniqueName: \"kubernetes.io/projected/7ab19ece-433f-48c7-8da3-d025f968169c-kube-api-access-nmkrr\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.408273 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.408282 4940 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.408290 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ab19ece-433f-48c7-8da3-d025f968169c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.536491 4940 generic.go:334] "Generic (PLEG): container finished" podID="7ab19ece-433f-48c7-8da3-d025f968169c" containerID="0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff" exitCode=0 Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.536791 4940 generic.go:334] "Generic (PLEG): container finished" podID="7ab19ece-433f-48c7-8da3-d025f968169c" containerID="f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1" exitCode=143 Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.536794 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.537332 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ab19ece-433f-48c7-8da3-d025f968169c","Type":"ContainerDied","Data":"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff"} Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.537411 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ab19ece-433f-48c7-8da3-d025f968169c","Type":"ContainerDied","Data":"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1"} Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.537464 4940 scope.go:117] "RemoveContainer" containerID="0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.538400 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7ab19ece-433f-48c7-8da3-d025f968169c","Type":"ContainerDied","Data":"7f1bd89c46e815978a227b8efc3bd3195a52edc738f1486dda3ed0f3e35d8d36"} Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.583988 4940 scope.go:117] "RemoveContainer" 
containerID="f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.584174 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.595028 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.625419 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.625472 4940 scope.go:117] "RemoveContainer" containerID="0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff" Feb 23 09:11:40 crc kubenswrapper[4940]: E0223 09:11:40.626772 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff\": container with ID starting with 0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff not found: ID does not exist" containerID="0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.626805 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff"} err="failed to get container status \"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff\": rpc error: code = NotFound desc = could not find container \"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff\": container with ID starting with 0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff not found: ID does not exist" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.626830 4940 scope.go:117] "RemoveContainer" containerID="f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1" Feb 23 09:11:40 crc kubenswrapper[4940]: E0223 
09:11:40.631127 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-log" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.631154 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-log" Feb 23 09:11:40 crc kubenswrapper[4940]: E0223 09:11:40.631389 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-metadata" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.631395 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-metadata" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.631567 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-metadata" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.631579 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" containerName="nova-metadata-log" Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.632754 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: E0223 09:11:40.632995 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1\": container with ID starting with f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1 not found: ID does not exist" containerID="f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.633024 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1"} err="failed to get container status \"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1\": rpc error: code = NotFound desc = could not find container \"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1\": container with ID starting with f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1 not found: ID does not exist"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.633042 4940 scope.go:117] "RemoveContainer" containerID="0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.633412 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff"} err="failed to get container status \"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff\": rpc error: code = NotFound desc = could not find container \"0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff\": container with ID starting with 0e62f4a77391bfa625a3fe0df1a7ea63a73118344160cc4904547ac7bf0887ff not found: ID does not exist"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.633433 4940 scope.go:117] "RemoveContainer" containerID="f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.633651 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1"} err="failed to get container status \"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1\": rpc error: code = NotFound desc = could not find container \"f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1\": container with ID starting with f1d90aa5a3c7ae92f5ee82674c318b48c6fb8b829670f91a69709eaf2a8e86a1 not found: ID does not exist"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.634833 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.635226 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.640549 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.701907 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 09:11:40 crc kubenswrapper[4940]: W0223 09:11:40.707563 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc854cfd6_7319_4a6c_8893_a96cd32bdcd0.slice/crio-e6e8102bb815902faf21112210c67795f32b4b905a8326018cca4709070f4c59 WatchSource:0}: Error finding container e6e8102bb815902faf21112210c67795f32b4b905a8326018cca4709070f4c59: Status 404 returned error can't find the container with id e6e8102bb815902faf21112210c67795f32b4b905a8326018cca4709070f4c59
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.717485 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwg59\" (UniqueName: \"kubernetes.io/projected/fca2de66-25b8-4d49-8283-8870c62104d3-kube-api-access-kwg59\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.717533 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca2de66-25b8-4d49-8283-8870c62104d3-logs\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.717689 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.717712 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-config-data\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.717914 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.819548 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.819639 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwg59\" (UniqueName: \"kubernetes.io/projected/fca2de66-25b8-4d49-8283-8870c62104d3-kube-api-access-kwg59\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.819670 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca2de66-25b8-4d49-8283-8870c62104d3-logs\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.819729 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.819746 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-config-data\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.820407 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca2de66-25b8-4d49-8283-8870c62104d3-logs\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.823365 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.823633 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-config-data\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.823753 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.836603 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwg59\" (UniqueName: \"kubernetes.io/projected/fca2de66-25b8-4d49-8283-8870c62104d3-kube-api-access-kwg59\") pod \"nova-metadata-0\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " pod="openstack/nova-metadata-0"
Feb 23 09:11:40 crc kubenswrapper[4940]: W0223 09:11:40.884748 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d07da4_4091_410a_a0a8_befbd51a314e.slice/crio-058f664c0ebef86bf8f954298a2362f0e7ac4e460e4f618d766bb45824df039e WatchSource:0}: Error finding container 058f664c0ebef86bf8f954298a2362f0e7ac4e460e4f618d766bb45824df039e: Status 404 returned error can't find the container with id 058f664c0ebef86bf8f954298a2362f0e7ac4e460e4f618d766bb45824df039e
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.886201 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 09:11:40 crc kubenswrapper[4940]: I0223 09:11:40.954103 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.368216 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab19ece-433f-48c7-8da3-d025f968169c" path="/var/lib/kubelet/pods/7ab19ece-433f-48c7-8da3-d025f968169c/volumes"
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.369355 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f670bbd9-012d-433a-90ec-91662031476c" path="/var/lib/kubelet/pods/f670bbd9-012d-433a-90ec-91662031476c/volumes"
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.553408 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d07da4-4091-410a-a0a8-befbd51a314e","Type":"ContainerStarted","Data":"f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1"}
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.553746 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d07da4-4091-410a-a0a8-befbd51a314e","Type":"ContainerStarted","Data":"cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0"}
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.553766 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d07da4-4091-410a-a0a8-befbd51a314e","Type":"ContainerStarted","Data":"058f664c0ebef86bf8f954298a2362f0e7ac4e460e4f618d766bb45824df039e"}
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.558804 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c854cfd6-7319-4a6c-8893-a96cd32bdcd0","Type":"ContainerStarted","Data":"54747231302c2fb676c6dcd2f88e07aa94e8167f5b3611a980eeb37b34a6bc8e"}
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.558848 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c854cfd6-7319-4a6c-8893-a96cd32bdcd0","Type":"ContainerStarted","Data":"e6e8102bb815902faf21112210c67795f32b4b905a8326018cca4709070f4c59"}
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.559217 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.614148 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.615355 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.615341094 podStartE2EDuration="2.615341094s" podCreationTimestamp="2026-02-23 09:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:41.572488426 +0000 UTC m=+1432.955694583" watchObservedRunningTime="2026-02-23 09:11:41.615341094 +0000 UTC m=+1432.998547251"
Feb 23 09:11:41 crc kubenswrapper[4940]: I0223 09:11:41.637809 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.637786389 podStartE2EDuration="2.637786389s" podCreationTimestamp="2026-02-23 09:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:41.5913681 +0000 UTC m=+1432.974574257" watchObservedRunningTime="2026-02-23 09:11:41.637786389 +0000 UTC m=+1433.020992546"
Feb 23 09:11:42 crc kubenswrapper[4940]: I0223 09:11:42.579086 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca2de66-25b8-4d49-8283-8870c62104d3","Type":"ContainerStarted","Data":"86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc"}
Feb 23 09:11:42 crc kubenswrapper[4940]: I0223 09:11:42.581702 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca2de66-25b8-4d49-8283-8870c62104d3","Type":"ContainerStarted","Data":"6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b"}
Feb 23 09:11:42 crc kubenswrapper[4940]: I0223 09:11:42.582778 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca2de66-25b8-4d49-8283-8870c62104d3","Type":"ContainerStarted","Data":"83df78500425ea9bebce9b466c18c7b4bc2e69104c68348789a02c092856e0f4"}
Feb 23 09:11:42 crc kubenswrapper[4940]: I0223 09:11:42.632820 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.632795359 podStartE2EDuration="2.632795359s" podCreationTimestamp="2026-02-23 09:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:42.613844963 +0000 UTC m=+1433.997051110" watchObservedRunningTime="2026-02-23 09:11:42.632795359 +0000 UTC m=+1434.016001516"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.184177 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.274673 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-config-data\") pod \"641aa3c3-0f60-4529-9422-9343f561827f\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") "
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.274808 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2vbt\" (UniqueName: \"kubernetes.io/projected/641aa3c3-0f60-4529-9422-9343f561827f-kube-api-access-q2vbt\") pod \"641aa3c3-0f60-4529-9422-9343f561827f\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") "
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.274866 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-combined-ca-bundle\") pod \"641aa3c3-0f60-4529-9422-9343f561827f\" (UID: \"641aa3c3-0f60-4529-9422-9343f561827f\") "
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.280970 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641aa3c3-0f60-4529-9422-9343f561827f-kube-api-access-q2vbt" (OuterVolumeSpecName: "kube-api-access-q2vbt") pod "641aa3c3-0f60-4529-9422-9343f561827f" (UID: "641aa3c3-0f60-4529-9422-9343f561827f"). InnerVolumeSpecName "kube-api-access-q2vbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.308928 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "641aa3c3-0f60-4529-9422-9343f561827f" (UID: "641aa3c3-0f60-4529-9422-9343f561827f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.309351 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-config-data" (OuterVolumeSpecName: "config-data") pod "641aa3c3-0f60-4529-9422-9343f561827f" (UID: "641aa3c3-0f60-4529-9422-9343f561827f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.377066 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.377113 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641aa3c3-0f60-4529-9422-9343f561827f-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.377124 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2vbt\" (UniqueName: \"kubernetes.io/projected/641aa3c3-0f60-4529-9422-9343f561827f-kube-api-access-q2vbt\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.589866 4940 generic.go:334] "Generic (PLEG): container finished" podID="641aa3c3-0f60-4529-9422-9343f561827f" containerID="6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0" exitCode=0
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.590036 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"641aa3c3-0f60-4529-9422-9343f561827f","Type":"ContainerDied","Data":"6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0"}
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.592532 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"641aa3c3-0f60-4529-9422-9343f561827f","Type":"ContainerDied","Data":"51e257c07327a6cf9d23979b642b55e65f07d5aacd04bb8fb2f28ab0dfca6b7e"}
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.592686 4940 scope.go:117] "RemoveContainer" containerID="6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.590122 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.617945 4940 scope.go:117] "RemoveContainer" containerID="6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0"
Feb 23 09:11:43 crc kubenswrapper[4940]: E0223 09:11:43.618589 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0\": container with ID starting with 6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0 not found: ID does not exist" containerID="6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.618718 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0"} err="failed to get container status \"6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0\": rpc error: code = NotFound desc = could not find container \"6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0\": container with ID starting with 6f2799bcf55c485f6fecb0cd16f3f3064476b4085480847ace38dd7078af23c0 not found: ID does not exist"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.624075 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.639187 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.648502 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:43 crc kubenswrapper[4940]: E0223 09:11:43.648978 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641aa3c3-0f60-4529-9422-9343f561827f" containerName="nova-scheduler-scheduler"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.648994 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="641aa3c3-0f60-4529-9422-9343f561827f" containerName="nova-scheduler-scheduler"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.649413 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="641aa3c3-0f60-4529-9422-9343f561827f" containerName="nova-scheduler-scheduler"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.650141 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.652393 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.660098 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.783577 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.783786 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnmt\" (UniqueName: \"kubernetes.io/projected/da9fd460-b67b-4141-8f98-a68f5d73aec4-kube-api-access-9cnmt\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.784049 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-config-data\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.894049 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-config-data\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.894563 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.894877 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnmt\" (UniqueName: \"kubernetes.io/projected/da9fd460-b67b-4141-8f98-a68f5d73aec4-kube-api-access-9cnmt\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.903569 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-config-data\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.909393 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.913869 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnmt\" (UniqueName: \"kubernetes.io/projected/da9fd460-b67b-4141-8f98-a68f5d73aec4-kube-api-access-9cnmt\") pod \"nova-scheduler-0\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " pod="openstack/nova-scheduler-0"
Feb 23 09:11:43 crc kubenswrapper[4940]: I0223 09:11:43.969726 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.076749 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 09:11:45 crc kubenswrapper[4940]: W0223 09:11:45.081272 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda9fd460_b67b_4141_8f98_a68f5d73aec4.slice/crio-47059df0300a7bc02bfbc4350792cdc4c8780c3be71cfecd6e0e596d77ac3b99 WatchSource:0}: Error finding container 47059df0300a7bc02bfbc4350792cdc4c8780c3be71cfecd6e0e596d77ac3b99: Status 404 returned error can't find the container with id 47059df0300a7bc02bfbc4350792cdc4c8780c3be71cfecd6e0e596d77ac3b99
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.252775 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.357466 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641aa3c3-0f60-4529-9422-9343f561827f" path="/var/lib/kubelet/pods/641aa3c3-0f60-4529-9422-9343f561827f/volumes"
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.424077 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.610641 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da9fd460-b67b-4141-8f98-a68f5d73aec4","Type":"ContainerStarted","Data":"c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e"}
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.610883 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da9fd460-b67b-4141-8f98-a68f5d73aec4","Type":"ContainerStarted","Data":"47059df0300a7bc02bfbc4350792cdc4c8780c3be71cfecd6e0e596d77ac3b99"}
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.635801 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6357789719999998 podStartE2EDuration="2.635778972s" podCreationTimestamp="2026-02-23 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:11:45.63284732 +0000 UTC m=+1437.016053497" watchObservedRunningTime="2026-02-23 09:11:45.635778972 +0000 UTC m=+1437.018985129"
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.954531 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 09:11:45 crc kubenswrapper[4940]: I0223 09:11:45.954885 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 09:11:48 crc kubenswrapper[4940]: I0223 09:11:48.970518 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.015765 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.016316 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e38a100d-49bb-4138-a8c7-3eade8ae78f6" containerName="kube-state-metrics" containerID="cri-o://dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42" gracePeriod=30
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.509489 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.619872 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkhtl\" (UniqueName: \"kubernetes.io/projected/e38a100d-49bb-4138-a8c7-3eade8ae78f6-kube-api-access-jkhtl\") pod \"e38a100d-49bb-4138-a8c7-3eade8ae78f6\" (UID: \"e38a100d-49bb-4138-a8c7-3eade8ae78f6\") "
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.625252 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38a100d-49bb-4138-a8c7-3eade8ae78f6-kube-api-access-jkhtl" (OuterVolumeSpecName: "kube-api-access-jkhtl") pod "e38a100d-49bb-4138-a8c7-3eade8ae78f6" (UID: "e38a100d-49bb-4138-a8c7-3eade8ae78f6"). InnerVolumeSpecName "kube-api-access-jkhtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.647431 4940 generic.go:334] "Generic (PLEG): container finished" podID="e38a100d-49bb-4138-a8c7-3eade8ae78f6" containerID="dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42" exitCode=2
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.647477 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e38a100d-49bb-4138-a8c7-3eade8ae78f6","Type":"ContainerDied","Data":"dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42"}
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.647514 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e38a100d-49bb-4138-a8c7-3eade8ae78f6","Type":"ContainerDied","Data":"3fe4742bf6fb38311ecd6e100b5fe9025a408d4d741188abd326e7be1b0b9a87"}
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.647532 4940 scope.go:117] "RemoveContainer" containerID="dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.647800 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.689915 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.690188 4940 scope.go:117] "RemoveContainer" containerID="dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42"
Feb 23 09:11:49 crc kubenswrapper[4940]: E0223 09:11:49.690561 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42\": container with ID starting with dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42 not found: ID does not exist" containerID="dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.690602 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42"} err="failed to get container status \"dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42\": rpc error: code = NotFound desc = could not find container \"dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42\": container with ID starting with dca5188ad7f58440fcc360a71472b1bd9a8634c005b1d1db477a303d16535c42 not found: ID does not exist"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.710364 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.722174 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkhtl\" (UniqueName: \"kubernetes.io/projected/e38a100d-49bb-4138-a8c7-3eade8ae78f6-kube-api-access-jkhtl\") on node \"crc\" DevicePath \"\""
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.724417 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 09:11:49 crc kubenswrapper[4940]: E0223 09:11:49.724868 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38a100d-49bb-4138-a8c7-3eade8ae78f6" containerName="kube-state-metrics"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.724891 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38a100d-49bb-4138-a8c7-3eade8ae78f6" containerName="kube-state-metrics"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.725137 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38a100d-49bb-4138-a8c7-3eade8ae78f6" containerName="kube-state-metrics"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.725944 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.728763 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.728978 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.735862 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.925921 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l5sz\" (UniqueName: \"kubernetes.io/projected/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-api-access-6l5sz\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.926142 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.926201 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:49 crc kubenswrapper[4940]: I0223 09:11:49.926580 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.028900 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l5sz\" (UniqueName: \"kubernetes.io/projected/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-api-access-6l5sz\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.029014 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.029065 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.029218 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.032996 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.033352 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.034996 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.054389 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l5sz\" (UniqueName: \"kubernetes.io/projected/15d7a09a-83f9-4b41-a280-e0d7257ee6f3-kube-api-access-6l5sz\") pod \"kube-state-metrics-0\" (UID: \"15d7a09a-83f9-4b41-a280-e0d7257ee6f3\") " pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.320693 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.321542 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.349327 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.820527 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.868050 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.868397 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-central-agent" containerID="cri-o://b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539" gracePeriod=30
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.868460 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="proxy-httpd" containerID="cri-o://7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a" gracePeriod=30
Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.868494 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-notification-agent"
containerID="cri-o://6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944" gracePeriod=30 Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.868595 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="sg-core" containerID="cri-o://92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222" gracePeriod=30 Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.955640 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 09:11:50 crc kubenswrapper[4940]: I0223 09:11:50.957338 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.362553 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38a100d-49bb-4138-a8c7-3eade8ae78f6" path="/var/lib/kubelet/pods/e38a100d-49bb-4138-a8c7-3eade8ae78f6/volumes" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.404783 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.404824 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.678495 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"15d7a09a-83f9-4b41-a280-e0d7257ee6f3","Type":"ContainerStarted","Data":"7fa2588225aa68721c169b5ca8fe0a1ea57ad160afe1e9b1b709a1fe3ab40b8b"} Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.679125 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.679173 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15d7a09a-83f9-4b41-a280-e0d7257ee6f3","Type":"ContainerStarted","Data":"b00a3b792fa104f2cb42e1ce14ed3d6004283dd5dc440b4dc55281e8382cc76c"} Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.681703 4940 generic.go:334] "Generic (PLEG): container finished" podID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerID="7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a" exitCode=0 Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.681728 4940 generic.go:334] "Generic (PLEG): container finished" podID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerID="92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222" exitCode=2 Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.681741 4940 generic.go:334] "Generic (PLEG): container finished" podID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerID="b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539" exitCode=0 Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.682012 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerDied","Data":"7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a"} Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.682058 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerDied","Data":"92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222"} Feb 23 
09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.682072 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerDied","Data":"b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539"} Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.698812 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.302055148 podStartE2EDuration="2.698773243s" podCreationTimestamp="2026-02-23 09:11:49 +0000 UTC" firstStartedPulling="2026-02-23 09:11:50.822414144 +0000 UTC m=+1442.205620301" lastFinishedPulling="2026-02-23 09:11:51.219132239 +0000 UTC m=+1442.602338396" observedRunningTime="2026-02-23 09:11:51.694222659 +0000 UTC m=+1443.077428836" watchObservedRunningTime="2026-02-23 09:11:51.698773243 +0000 UTC m=+1443.081979400" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.962882 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:11:51 crc kubenswrapper[4940]: I0223 09:11:51.967022 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.503059 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.594729 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-combined-ca-bundle\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600013 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hqs\" (UniqueName: \"kubernetes.io/projected/2d2d73f9-5768-49cf-b547-7668dfe210fa-kube-api-access-85hqs\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600093 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-scripts\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600115 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-config-data\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600175 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-sg-core-conf-yaml\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600316 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-log-httpd\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600407 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-run-httpd\") pod \"2d2d73f9-5768-49cf-b547-7668dfe210fa\" (UID: \"2d2d73f9-5768-49cf-b547-7668dfe210fa\") " Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.600914 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.604425 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.606214 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.606244 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d2d73f9-5768-49cf-b547-7668dfe210fa-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.609080 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2d73f9-5768-49cf-b547-7668dfe210fa-kube-api-access-85hqs" (OuterVolumeSpecName: "kube-api-access-85hqs") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "kube-api-access-85hqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.628090 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-scripts" (OuterVolumeSpecName: "scripts") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.685640 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.699477 4940 generic.go:334] "Generic (PLEG): container finished" podID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerID="6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944" exitCode=0 Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.700862 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.701474 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerDied","Data":"6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944"} Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.701507 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d2d73f9-5768-49cf-b547-7668dfe210fa","Type":"ContainerDied","Data":"c13052bdfef55e477feca3b1f0e340338d40223f2157fc82f35f2a0f79189786"} Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.701531 4940 scope.go:117] "RemoveContainer" containerID="7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.708978 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85hqs\" (UniqueName: \"kubernetes.io/projected/2d2d73f9-5768-49cf-b547-7668dfe210fa-kube-api-access-85hqs\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.709021 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.709035 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.741404 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.743252 4940 scope.go:117] "RemoveContainer" containerID="92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.748365 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-config-data" (OuterVolumeSpecName: "config-data") pod "2d2d73f9-5768-49cf-b547-7668dfe210fa" (UID: "2d2d73f9-5768-49cf-b547-7668dfe210fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.776814 4940 scope.go:117] "RemoveContainer" containerID="6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.797299 4940 scope.go:117] "RemoveContainer" containerID="b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.810662 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.810817 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2d73f9-5768-49cf-b547-7668dfe210fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.817554 4940 scope.go:117] "RemoveContainer" containerID="7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a" Feb 23 09:11:52 crc kubenswrapper[4940]: E0223 09:11:52.818075 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a\": container with ID starting with 7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a not found: ID does not exist" containerID="7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.818120 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a"} err="failed to get container status \"7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a\": rpc error: code = NotFound desc = could not find container 
\"7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a\": container with ID starting with 7743fbc2d1e0cf589f7569542a4a293f2b0d98669b1f5b7c2dfbdd6625aedb6a not found: ID does not exist" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.818150 4940 scope.go:117] "RemoveContainer" containerID="92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222" Feb 23 09:11:52 crc kubenswrapper[4940]: E0223 09:11:52.818505 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222\": container with ID starting with 92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222 not found: ID does not exist" containerID="92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.818548 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222"} err="failed to get container status \"92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222\": rpc error: code = NotFound desc = could not find container \"92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222\": container with ID starting with 92cd1332968e403d5b6f5dbdaa072b79d31a5463acd5b2ea7ff773a49147e222 not found: ID does not exist" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.818575 4940 scope.go:117] "RemoveContainer" containerID="6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944" Feb 23 09:11:52 crc kubenswrapper[4940]: E0223 09:11:52.819052 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944\": container with ID starting with 6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944 not found: ID does not exist" 
containerID="6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.819079 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944"} err="failed to get container status \"6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944\": rpc error: code = NotFound desc = could not find container \"6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944\": container with ID starting with 6775a9563f8803d5e09d9c3ae9fe02e9ff382c3075b6e7ca6b066eb0424f3944 not found: ID does not exist" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.819097 4940 scope.go:117] "RemoveContainer" containerID="b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539" Feb 23 09:11:52 crc kubenswrapper[4940]: E0223 09:11:52.819378 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539\": container with ID starting with b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539 not found: ID does not exist" containerID="b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539" Feb 23 09:11:52 crc kubenswrapper[4940]: I0223 09:11:52.819423 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539"} err="failed to get container status \"b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539\": rpc error: code = NotFound desc = could not find container \"b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539\": container with ID starting with b6fc367c9d77e3129f93a0fb6993a5065791bcbdc98937e9c5279a703ca7d539 not found: ID does not exist" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.035863 4940 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.049425 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.063490 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:53 crc kubenswrapper[4940]: E0223 09:11:53.064039 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-notification-agent" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064064 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-notification-agent" Feb 23 09:11:53 crc kubenswrapper[4940]: E0223 09:11:53.064090 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="proxy-httpd" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064097 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="proxy-httpd" Feb 23 09:11:53 crc kubenswrapper[4940]: E0223 09:11:53.064122 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-central-agent" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064129 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-central-agent" Feb 23 09:11:53 crc kubenswrapper[4940]: E0223 09:11:53.064147 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="sg-core" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064153 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="sg-core" Feb 23 09:11:53 crc 
kubenswrapper[4940]: I0223 09:11:53.064353 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-notification-agent" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064369 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="sg-core" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064386 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="ceilometer-central-agent" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.064403 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" containerName="proxy-httpd" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.066357 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.071057 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.090692 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.098355 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.098600 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118103 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-scripts\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 
crc kubenswrapper[4940]: I0223 09:11:53.118206 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118239 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-run-httpd\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118283 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-log-httpd\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118338 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118446 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhch\" (UniqueName: \"kubernetes.io/projected/e120736f-7b92-4b46-8d1d-7d50ecda615a-kube-api-access-2lhch\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118479 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-config-data\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.118534 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220012 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhch\" (UniqueName: \"kubernetes.io/projected/e120736f-7b92-4b46-8d1d-7d50ecda615a-kube-api-access-2lhch\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220099 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-config-data\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220146 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220224 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-scripts\") pod \"ceilometer-0\" 
(UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220264 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220284 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-run-httpd\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220317 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-log-httpd\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220357 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.220845 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-run-httpd\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.221252 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-log-httpd\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.226376 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-scripts\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.226721 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-config-data\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.235222 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.240394 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhch\" (UniqueName: \"kubernetes.io/projected/e120736f-7b92-4b46-8d1d-7d50ecda615a-kube-api-access-2lhch\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.241459 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.246211 
4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.360253 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2d73f9-5768-49cf-b547-7668dfe210fa" path="/var/lib/kubelet/pods/2d2d73f9-5768-49cf-b547-7668dfe210fa/volumes" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.474708 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.947585 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:11:53 crc kubenswrapper[4940]: W0223 09:11:53.952190 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode120736f_7b92_4b46_8d1d_7d50ecda615a.slice/crio-4c4de330d48c7cda0adc7ad1cdbedf7b1918f2d90b4c05fd657f39a4842b5b42 WatchSource:0}: Error finding container 4c4de330d48c7cda0adc7ad1cdbedf7b1918f2d90b4c05fd657f39a4842b5b42: Status 404 returned error can't find the container with id 4c4de330d48c7cda0adc7ad1cdbedf7b1918f2d90b4c05fd657f39a4842b5b42 Feb 23 09:11:53 crc kubenswrapper[4940]: I0223 09:11:53.970081 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 09:11:54 crc kubenswrapper[4940]: I0223 09:11:54.002108 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 09:11:54 crc kubenswrapper[4940]: I0223 09:11:54.723183 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerStarted","Data":"4c4de330d48c7cda0adc7ad1cdbedf7b1918f2d90b4c05fd657f39a4842b5b42"} Feb 23 09:11:54 crc kubenswrapper[4940]: I0223 09:11:54.762135 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 09:11:55 crc kubenswrapper[4940]: I0223 09:11:55.737536 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerStarted","Data":"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5"} Feb 23 09:11:56 crc kubenswrapper[4940]: I0223 09:11:56.750143 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerStarted","Data":"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1"} Feb 23 09:11:56 crc kubenswrapper[4940]: I0223 09:11:56.751052 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerStarted","Data":"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895"} Feb 23 09:11:59 crc kubenswrapper[4940]: I0223 09:11:59.786885 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerStarted","Data":"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e"} Feb 23 09:11:59 crc kubenswrapper[4940]: I0223 09:11:59.787778 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:11:59 crc kubenswrapper[4940]: I0223 09:11:59.830832 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.737746797 podStartE2EDuration="6.830803646s" podCreationTimestamp="2026-02-23 09:11:53 +0000 UTC" firstStartedPulling="2026-02-23 
09:11:53.954440765 +0000 UTC m=+1445.337646922" lastFinishedPulling="2026-02-23 09:11:59.047497614 +0000 UTC m=+1450.430703771" observedRunningTime="2026-02-23 09:11:59.82010401 +0000 UTC m=+1451.203310187" watchObservedRunningTime="2026-02-23 09:11:59.830803646 +0000 UTC m=+1451.214009833" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.329763 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.330442 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.332116 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.340976 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.374379 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.806749 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.809557 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.980937 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 09:12:00 crc kubenswrapper[4940]: I0223 09:12:00.985229 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.004216 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 
09:12:01.012676 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-hgkbr"] Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.015009 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.026219 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-hgkbr"] Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.093890 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.093941 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7bdx\" (UniqueName: \"kubernetes.io/projected/c5af1432-d260-46c1-9502-de04b6978ca4-kube-api-access-n7bdx\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.094040 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-svc\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.094103 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-config\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: 
\"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.094126 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.094221 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.196462 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-svc\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.197032 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-config\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.197081 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " 
pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.197239 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.197315 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-svc\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.197400 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.197426 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7bdx\" (UniqueName: \"kubernetes.io/projected/c5af1432-d260-46c1-9502-de04b6978ca4-kube-api-access-n7bdx\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.198041 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 
crc kubenswrapper[4940]: I0223 09:12:01.198118 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-config\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.198366 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.198665 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-swift-storage-0\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.228600 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7bdx\" (UniqueName: \"kubernetes.io/projected/c5af1432-d260-46c1-9502-de04b6978ca4-kube-api-access-n7bdx\") pod \"dnsmasq-dns-5b4c997d87-hgkbr\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.339448 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.816931 4940 generic.go:334] "Generic (PLEG): container finished" podID="070f7f4a-ea04-483e-a0c6-719372ff945e" containerID="b46cb1fb248c63fc449bf292c35c3c6b3bde0ee2982647e59059beb0771def29" exitCode=137 Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.816984 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"070f7f4a-ea04-483e-a0c6-719372ff945e","Type":"ContainerDied","Data":"b46cb1fb248c63fc449bf292c35c3c6b3bde0ee2982647e59059beb0771def29"} Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.829311 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 09:12:01 crc kubenswrapper[4940]: I0223 09:12:01.948893 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-hgkbr"] Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.023493 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.214910 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-config-data\") pod \"070f7f4a-ea04-483e-a0c6-719372ff945e\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.215753 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-combined-ca-bundle\") pod \"070f7f4a-ea04-483e-a0c6-719372ff945e\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.216102 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfp4\" (UniqueName: \"kubernetes.io/projected/070f7f4a-ea04-483e-a0c6-719372ff945e-kube-api-access-dmfp4\") pod \"070f7f4a-ea04-483e-a0c6-719372ff945e\" (UID: \"070f7f4a-ea04-483e-a0c6-719372ff945e\") " Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.225699 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070f7f4a-ea04-483e-a0c6-719372ff945e-kube-api-access-dmfp4" (OuterVolumeSpecName: "kube-api-access-dmfp4") pod "070f7f4a-ea04-483e-a0c6-719372ff945e" (UID: "070f7f4a-ea04-483e-a0c6-719372ff945e"). InnerVolumeSpecName "kube-api-access-dmfp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.267932 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "070f7f4a-ea04-483e-a0c6-719372ff945e" (UID: "070f7f4a-ea04-483e-a0c6-719372ff945e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.270905 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-config-data" (OuterVolumeSpecName: "config-data") pod "070f7f4a-ea04-483e-a0c6-719372ff945e" (UID: "070f7f4a-ea04-483e-a0c6-719372ff945e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.318163 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.318218 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070f7f4a-ea04-483e-a0c6-719372ff945e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.318247 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmfp4\" (UniqueName: \"kubernetes.io/projected/070f7f4a-ea04-483e-a0c6-719372ff945e-kube-api-access-dmfp4\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.827459 4940 generic.go:334] "Generic (PLEG): container finished" podID="c5af1432-d260-46c1-9502-de04b6978ca4" containerID="8551a511799cde32c8cb464ced3e5564dbf55fa5fb7dbe69e40d54cfd114350c" exitCode=0 Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.827539 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" event={"ID":"c5af1432-d260-46c1-9502-de04b6978ca4","Type":"ContainerDied","Data":"8551a511799cde32c8cb464ced3e5564dbf55fa5fb7dbe69e40d54cfd114350c"} Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.827567 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" event={"ID":"c5af1432-d260-46c1-9502-de04b6978ca4","Type":"ContainerStarted","Data":"986971ce13f9979f4215c500234f068424d10dbd8a4372b6b8f7ec169f8786ec"} Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.829581 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"070f7f4a-ea04-483e-a0c6-719372ff945e","Type":"ContainerDied","Data":"31b2872875f0077a3d25df81302a74215149e83a6e62fdddabc479f08711e9f7"} Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.829657 4940 scope.go:117] "RemoveContainer" containerID="b46cb1fb248c63fc449bf292c35c3c6b3bde0ee2982647e59059beb0771def29" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.829788 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.969388 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.969719 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-central-agent" containerID="cri-o://c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" gracePeriod=30 Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.973053 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="sg-core" containerID="cri-o://e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" gracePeriod=30 Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.973203 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="proxy-httpd" 
containerID="cri-o://c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" gracePeriod=30 Feb 23 09:12:02 crc kubenswrapper[4940]: I0223 09:12:02.973259 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-notification-agent" containerID="cri-o://e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" gracePeriod=30 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.072075 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.082997 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.103235 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 09:12:03 crc kubenswrapper[4940]: E0223 09:12:03.103683 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070f7f4a-ea04-483e-a0c6-719372ff945e" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.103696 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="070f7f4a-ea04-483e-a0c6-719372ff945e" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.103904 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="070f7f4a-ea04-483e-a0c6-719372ff945e" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.104689 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.107773 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.107977 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.108248 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.140769 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mlw\" (UniqueName: \"kubernetes.io/projected/1baa0ab5-14b9-4150-872e-e135857e3033-kube-api-access-98mlw\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.140868 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.140965 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.141002 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.141069 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.144072 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.243035 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.243099 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.243151 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: 
I0223 09:12:03.243270 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98mlw\" (UniqueName: \"kubernetes.io/projected/1baa0ab5-14b9-4150-872e-e135857e3033-kube-api-access-98mlw\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.243347 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.258118 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.258449 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.259251 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.259526 4940 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1baa0ab5-14b9-4150-872e-e135857e3033-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.271871 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mlw\" (UniqueName: \"kubernetes.io/projected/1baa0ab5-14b9-4150-872e-e135857e3033-kube-api-access-98mlw\") pod \"nova-cell1-novncproxy-0\" (UID: \"1baa0ab5-14b9-4150-872e-e135857e3033\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.356646 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070f7f4a-ea04-483e-a0c6-719372ff945e" path="/var/lib/kubelet/pods/070f7f4a-ea04-483e-a0c6-719372ff945e/volumes" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.426266 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.690513 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.755687 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784394 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-run-httpd\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784451 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-ceilometer-tls-certs\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784548 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhch\" (UniqueName: \"kubernetes.io/projected/e120736f-7b92-4b46-8d1d-7d50ecda615a-kube-api-access-2lhch\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784605 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-combined-ca-bundle\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784754 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-log-httpd\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784820 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-config-data\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784877 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-scripts\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.784939 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-sg-core-conf-yaml\") pod \"e120736f-7b92-4b46-8d1d-7d50ecda615a\" (UID: \"e120736f-7b92-4b46-8d1d-7d50ecda615a\") " Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.787016 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.801069 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e120736f-7b92-4b46-8d1d-7d50ecda615a-kube-api-access-2lhch" (OuterVolumeSpecName: "kube-api-access-2lhch") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "kube-api-access-2lhch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.802872 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.803660 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhch\" (UniqueName: \"kubernetes.io/projected/e120736f-7b92-4b46-8d1d-7d50ecda615a-kube-api-access-2lhch\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.803684 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.803695 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e120736f-7b92-4b46-8d1d-7d50ecda615a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.815405 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-scripts" (OuterVolumeSpecName: "scripts") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.833621 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.843068 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" event={"ID":"c5af1432-d260-46c1-9502-de04b6978ca4","Type":"ContainerStarted","Data":"e444a861a71b87a1edc1fc26769e634a7e4ca0943635b8c7c4077988298a1982"} Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.843856 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.850691 4940 generic.go:334] "Generic (PLEG): container finished" podID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" exitCode=0 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.850722 4940 generic.go:334] "Generic (PLEG): container finished" podID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" exitCode=2 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.850730 4940 generic.go:334] "Generic (PLEG): container finished" podID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" exitCode=0 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.850738 4940 generic.go:334] "Generic (PLEG): container finished" podID="e120736f-7b92-4b46-8d1d-7d50ecda615a" 
containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" exitCode=0 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851193 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851689 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerDied","Data":"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e"} Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851718 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerDied","Data":"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1"} Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851730 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerDied","Data":"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895"} Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851740 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerDied","Data":"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5"} Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851749 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e120736f-7b92-4b46-8d1d-7d50ecda615a","Type":"ContainerDied","Data":"4c4de330d48c7cda0adc7ad1cdbedf7b1918f2d90b4c05fd657f39a4842b5b42"} Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851763 4940 scope.go:117] "RemoveContainer" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.851928 4940 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-log" containerID="cri-o://cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0" gracePeriod=30 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.852107 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-api" containerID="cri-o://f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1" gracePeriod=30 Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.874545 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" podStartSLOduration=3.8745288970000002 podStartE2EDuration="3.874528897s" podCreationTimestamp="2026-02-23 09:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:03.860909188 +0000 UTC m=+1455.244115345" watchObservedRunningTime="2026-02-23 09:12:03.874528897 +0000 UTC m=+1455.257735054" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.883927 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.896782 4940 scope.go:117] "RemoveContainer" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.907180 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.907221 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.907234 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.910771 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.932857 4940 scope.go:117] "RemoveContainer" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.947072 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-config-data" (OuterVolumeSpecName: "config-data") pod "e120736f-7b92-4b46-8d1d-7d50ecda615a" (UID: "e120736f-7b92-4b46-8d1d-7d50ecda615a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.954923 4940 scope.go:117] "RemoveContainer" containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.981186 4940 scope.go:117] "RemoveContainer" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" Feb 23 09:12:03 crc kubenswrapper[4940]: E0223 09:12:03.981593 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": container with ID starting with c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e not found: ID does not exist" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.981637 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e"} err="failed to get container status \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": rpc error: code = NotFound desc = could not find container \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": container with ID starting with c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.981658 4940 scope.go:117] "RemoveContainer" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" Feb 23 09:12:03 crc kubenswrapper[4940]: E0223 09:12:03.981890 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": container with ID starting with 
e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1 not found: ID does not exist" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.981915 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1"} err="failed to get container status \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": rpc error: code = NotFound desc = could not find container \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": container with ID starting with e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.981928 4940 scope.go:117] "RemoveContainer" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" Feb 23 09:12:03 crc kubenswrapper[4940]: E0223 09:12:03.983044 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": container with ID starting with e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895 not found: ID does not exist" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.983064 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895"} err="failed to get container status \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": rpc error: code = NotFound desc = could not find container \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": container with ID starting with e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895 not found: ID does not 
exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.983080 4940 scope.go:117] "RemoveContainer" containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" Feb 23 09:12:03 crc kubenswrapper[4940]: E0223 09:12:03.985851 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": container with ID starting with c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5 not found: ID does not exist" containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.985971 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5"} err="failed to get container status \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": rpc error: code = NotFound desc = could not find container \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": container with ID starting with c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.986049 4940 scope.go:117] "RemoveContainer" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.987061 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e"} err="failed to get container status \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": rpc error: code = NotFound desc = could not find container \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": container with ID starting with c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e not found: ID 
does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.987094 4940 scope.go:117] "RemoveContainer" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.987379 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1"} err="failed to get container status \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": rpc error: code = NotFound desc = could not find container \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": container with ID starting with e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.987452 4940 scope.go:117] "RemoveContainer" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.987986 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.990281 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895"} err="failed to get container status \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": rpc error: code = NotFound desc = could not find container \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": container with ID starting with e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.990321 4940 scope.go:117] "RemoveContainer" containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.990669 4940 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5"} err="failed to get container status \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": rpc error: code = NotFound desc = could not find container \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": container with ID starting with c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.990748 4940 scope.go:117] "RemoveContainer" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.991126 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e"} err="failed to get container status \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": rpc error: code = NotFound desc = could not find container \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": container with ID starting with c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.991197 4940 scope.go:117] "RemoveContainer" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.991648 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1"} err="failed to get container status \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": rpc error: code = NotFound desc = could not find container \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": container with ID starting with 
e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.991746 4940 scope.go:117] "RemoveContainer" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.992021 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895"} err="failed to get container status \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": rpc error: code = NotFound desc = could not find container \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": container with ID starting with e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.992089 4940 scope.go:117] "RemoveContainer" containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.992706 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5"} err="failed to get container status \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": rpc error: code = NotFound desc = could not find container \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": container with ID starting with c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.992908 4940 scope.go:117] "RemoveContainer" containerID="c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.993897 4940 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e"} err="failed to get container status \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": rpc error: code = NotFound desc = could not find container \"c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e\": container with ID starting with c1fa4d003be46c9defe3f647dc6c601e8473e51d5241467ac8ccb365bc1ffb9e not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.993947 4940 scope.go:117] "RemoveContainer" containerID="e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.994264 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1"} err="failed to get container status \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": rpc error: code = NotFound desc = could not find container \"e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1\": container with ID starting with e8f248d11b28244b39f5851c9e0c10f97c9f33990d6b81f6d12cd49e231dbda1 not found: ID does not exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.994287 4940 scope.go:117] "RemoveContainer" containerID="e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.994557 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895"} err="failed to get container status \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": rpc error: code = NotFound desc = could not find container \"e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895\": container with ID starting with e05223ee3e2f63a459c1e9bfc98f8a035aea1e8a41c67f3777ad583c04d5b895 not found: ID does not 
exist" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.994601 4940 scope.go:117] "RemoveContainer" containerID="c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5" Feb 23 09:12:03 crc kubenswrapper[4940]: I0223 09:12:03.994891 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5"} err="failed to get container status \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": rpc error: code = NotFound desc = could not find container \"c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5\": container with ID starting with c0387048d5cc6e65589232521b32447365f179a079befde15ff7b1a868aa37d5 not found: ID does not exist" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.012170 4940 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.012209 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e120736f-7b92-4b46-8d1d-7d50ecda615a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.184641 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.194818 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.209756 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:04 crc kubenswrapper[4940]: E0223 09:12:04.210369 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-notification-agent" Feb 23 09:12:04 crc 
kubenswrapper[4940]: I0223 09:12:04.210398 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-notification-agent" Feb 23 09:12:04 crc kubenswrapper[4940]: E0223 09:12:04.210427 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-central-agent" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.210436 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-central-agent" Feb 23 09:12:04 crc kubenswrapper[4940]: E0223 09:12:04.210451 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="proxy-httpd" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.210459 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="proxy-httpd" Feb 23 09:12:04 crc kubenswrapper[4940]: E0223 09:12:04.210494 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="sg-core" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.210502 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="sg-core" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.210723 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="proxy-httpd" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.210745 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-central-agent" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.210760 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="sg-core" Feb 23 09:12:04 crc 
kubenswrapper[4940]: I0223 09:12:04.210785 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" containerName="ceilometer-notification-agent" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.213994 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.217301 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.217517 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.217806 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.228899 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320555 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320598 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320649 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfqjf\" (UniqueName: 
\"kubernetes.io/projected/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-kube-api-access-bfqjf\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320683 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-config-data\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320759 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320784 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-log-httpd\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320799 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-run-httpd\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.320849 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-scripts\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") 
" pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423044 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-config-data\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423405 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423441 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-log-httpd\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423464 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-run-httpd\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423554 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-scripts\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423736 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423767 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.423803 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfqjf\" (UniqueName: \"kubernetes.io/projected/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-kube-api-access-bfqjf\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.424145 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-run-httpd\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.424460 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-log-httpd\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.428933 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-config-data\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.429666 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.430342 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.430527 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.431374 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-scripts\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.442383 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfqjf\" (UniqueName: \"kubernetes.io/projected/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-kube-api-access-bfqjf\") pod \"ceilometer-0\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.531043 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.866997 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1baa0ab5-14b9-4150-872e-e135857e3033","Type":"ContainerStarted","Data":"aeffa871766ba5ee89d46738f3bb465bb7daa1a2e3a14b15ffd4ffb5f1591540"} Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.867224 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1baa0ab5-14b9-4150-872e-e135857e3033","Type":"ContainerStarted","Data":"e8feeb154d0a8d34082e9a521113830c2a6ce4dda4f5669112f4bace96003210"} Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.871060 4940 generic.go:334] "Generic (PLEG): container finished" podID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerID="cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0" exitCode=143 Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.872452 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d07da4-4091-410a-a0a8-befbd51a314e","Type":"ContainerDied","Data":"cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0"} Feb 23 09:12:04 crc kubenswrapper[4940]: I0223 09:12:04.896993 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.8969693890000001 podStartE2EDuration="1.896969389s" podCreationTimestamp="2026-02-23 09:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:04.884849068 +0000 UTC m=+1456.268055235" watchObservedRunningTime="2026-02-23 09:12:04.896969389 +0000 UTC m=+1456.280175546" Feb 23 09:12:05 crc kubenswrapper[4940]: I0223 09:12:05.069521 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:05 crc kubenswrapper[4940]: I0223 
09:12:05.127667 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:05 crc kubenswrapper[4940]: I0223 09:12:05.361229 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e120736f-7b92-4b46-8d1d-7d50ecda615a" path="/var/lib/kubelet/pods/e120736f-7b92-4b46-8d1d-7d50ecda615a/volumes" Feb 23 09:12:05 crc kubenswrapper[4940]: I0223 09:12:05.881311 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerStarted","Data":"d1b944b4afb6c500c651e3d8e2204841c18c0bfd65f3ef6fc188d60092844884"} Feb 23 09:12:06 crc kubenswrapper[4940]: I0223 09:12:06.895412 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerStarted","Data":"891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca"} Feb 23 09:12:06 crc kubenswrapper[4940]: I0223 09:12:06.895984 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerStarted","Data":"2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41"} Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.419702 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.510185 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d07da4-4091-410a-a0a8-befbd51a314e-logs\") pod \"e8d07da4-4091-410a-a0a8-befbd51a314e\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.510248 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-combined-ca-bundle\") pod \"e8d07da4-4091-410a-a0a8-befbd51a314e\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.510539 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-config-data\") pod \"e8d07da4-4091-410a-a0a8-befbd51a314e\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.510639 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnz47\" (UniqueName: \"kubernetes.io/projected/e8d07da4-4091-410a-a0a8-befbd51a314e-kube-api-access-lnz47\") pod \"e8d07da4-4091-410a-a0a8-befbd51a314e\" (UID: \"e8d07da4-4091-410a-a0a8-befbd51a314e\") " Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.511131 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d07da4-4091-410a-a0a8-befbd51a314e-logs" (OuterVolumeSpecName: "logs") pod "e8d07da4-4091-410a-a0a8-befbd51a314e" (UID: "e8d07da4-4091-410a-a0a8-befbd51a314e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.511413 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8d07da4-4091-410a-a0a8-befbd51a314e-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.524076 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d07da4-4091-410a-a0a8-befbd51a314e-kube-api-access-lnz47" (OuterVolumeSpecName: "kube-api-access-lnz47") pod "e8d07da4-4091-410a-a0a8-befbd51a314e" (UID: "e8d07da4-4091-410a-a0a8-befbd51a314e"). InnerVolumeSpecName "kube-api-access-lnz47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.546290 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8d07da4-4091-410a-a0a8-befbd51a314e" (UID: "e8d07da4-4091-410a-a0a8-befbd51a314e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.569768 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-config-data" (OuterVolumeSpecName: "config-data") pod "e8d07da4-4091-410a-a0a8-befbd51a314e" (UID: "e8d07da4-4091-410a-a0a8-befbd51a314e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.614592 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.614695 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8d07da4-4091-410a-a0a8-befbd51a314e-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.614713 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnz47\" (UniqueName: \"kubernetes.io/projected/e8d07da4-4091-410a-a0a8-befbd51a314e-kube-api-access-lnz47\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.909258 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerStarted","Data":"cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510"} Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.912298 4940 generic.go:334] "Generic (PLEG): container finished" podID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerID="f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1" exitCode=0 Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.912414 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.912431 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d07da4-4091-410a-a0a8-befbd51a314e","Type":"ContainerDied","Data":"f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1"} Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.912824 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8d07da4-4091-410a-a0a8-befbd51a314e","Type":"ContainerDied","Data":"058f664c0ebef86bf8f954298a2362f0e7ac4e460e4f618d766bb45824df039e"} Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.912862 4940 scope.go:117] "RemoveContainer" containerID="f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.954307 4940 scope.go:117] "RemoveContainer" containerID="cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0" Feb 23 09:12:07 crc kubenswrapper[4940]: I0223 09:12:07.961197 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.006649 4940 scope.go:117] "RemoveContainer" containerID="f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1" Feb 23 09:12:08 crc kubenswrapper[4940]: E0223 09:12:08.007231 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1\": container with ID starting with f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1 not found: ID does not exist" containerID="f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.007286 4940 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1"} err="failed to get container status \"f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1\": rpc error: code = NotFound desc = could not find container \"f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1\": container with ID starting with f09ec395112f9bf74fedd3ce504af7ff9ed750636f17c26ba1c4cea34a82f9b1 not found: ID does not exist" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.007314 4940 scope.go:117] "RemoveContainer" containerID="cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0" Feb 23 09:12:08 crc kubenswrapper[4940]: E0223 09:12:08.007911 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0\": container with ID starting with cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0 not found: ID does not exist" containerID="cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.007945 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0"} err="failed to get container status \"cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0\": rpc error: code = NotFound desc = could not find container \"cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0\": container with ID starting with cf282a4f5507687d699bb51e1cb039ae5df98499b0e57c3f7a898d4f5aac3cf0 not found: ID does not exist" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.011638 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.027220 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 
09:12:08 crc kubenswrapper[4940]: E0223 09:12:08.028228 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-api" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.028267 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-api" Feb 23 09:12:08 crc kubenswrapper[4940]: E0223 09:12:08.028371 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-log" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.028383 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-log" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.029194 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-api" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.029237 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" containerName="nova-api-log" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.031900 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.034712 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.036377 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.037719 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.042267 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.127282 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.127721 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2xqz\" (UniqueName: \"kubernetes.io/projected/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-kube-api-access-k2xqz\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.127770 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-config-data\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.127846 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.127927 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-public-tls-certs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.127984 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-logs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.229914 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2xqz\" (UniqueName: \"kubernetes.io/projected/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-kube-api-access-k2xqz\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.229978 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-config-data\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.230064 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " 
pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.230131 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-public-tls-certs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.230170 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-logs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.230228 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.230757 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-logs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.234565 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.234730 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-public-tls-certs\") 
pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.235008 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-config-data\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.235159 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.261219 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2xqz\" (UniqueName: \"kubernetes.io/projected/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-kube-api-access-k2xqz\") pod \"nova-api-0\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.364019 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.427375 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:08 crc kubenswrapper[4940]: W0223 09:12:08.878364 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16a728c5_ed82_4c5a_86ee_8d2b2442dbd6.slice/crio-a726096242b518a60edbe45d234f57d82c56bad94af169fd6706c22711793e39 WatchSource:0}: Error finding container a726096242b518a60edbe45d234f57d82c56bad94af169fd6706c22711793e39: Status 404 returned error can't find the container with id a726096242b518a60edbe45d234f57d82c56bad94af169fd6706c22711793e39 Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.879554 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:08 crc kubenswrapper[4940]: I0223 09:12:08.927092 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6","Type":"ContainerStarted","Data":"a726096242b518a60edbe45d234f57d82c56bad94af169fd6706c22711793e39"} Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.363268 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d07da4-4091-410a-a0a8-befbd51a314e" path="/var/lib/kubelet/pods/e8d07da4-4091-410a-a0a8-befbd51a314e/volumes" Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.939136 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerStarted","Data":"5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277"} Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.940391 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.939336 4940 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="sg-core" containerID="cri-o://cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510" gracePeriod=30 Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.939347 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-notification-agent" containerID="cri-o://891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca" gracePeriod=30 Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.939343 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="proxy-httpd" containerID="cri-o://5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277" gracePeriod=30 Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.939268 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-central-agent" containerID="cri-o://2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41" gracePeriod=30 Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.942468 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6","Type":"ContainerStarted","Data":"184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e"} Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.942512 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6","Type":"ContainerStarted","Data":"6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d"} Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.968920 4940 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.053990684 podStartE2EDuration="5.968893684s" podCreationTimestamp="2026-02-23 09:12:04 +0000 UTC" firstStartedPulling="2026-02-23 09:12:05.070479295 +0000 UTC m=+1456.453685452" lastFinishedPulling="2026-02-23 09:12:08.985382285 +0000 UTC m=+1460.368588452" observedRunningTime="2026-02-23 09:12:09.964214876 +0000 UTC m=+1461.347421023" watchObservedRunningTime="2026-02-23 09:12:09.968893684 +0000 UTC m=+1461.352099871" Feb 23 09:12:09 crc kubenswrapper[4940]: I0223 09:12:09.995334 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9953070139999998 podStartE2EDuration="2.995307014s" podCreationTimestamp="2026-02-23 09:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:09.985233727 +0000 UTC m=+1461.368439884" watchObservedRunningTime="2026-02-23 09:12:09.995307014 +0000 UTC m=+1461.378513181" Feb 23 09:12:10 crc kubenswrapper[4940]: I0223 09:12:10.955416 4940 generic.go:334] "Generic (PLEG): container finished" podID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerID="5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277" exitCode=0 Feb 23 09:12:10 crc kubenswrapper[4940]: I0223 09:12:10.955455 4940 generic.go:334] "Generic (PLEG): container finished" podID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerID="cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510" exitCode=2 Feb 23 09:12:10 crc kubenswrapper[4940]: I0223 09:12:10.955467 4940 generic.go:334] "Generic (PLEG): container finished" podID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerID="891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca" exitCode=0 Feb 23 09:12:10 crc kubenswrapper[4940]: I0223 09:12:10.955506 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerDied","Data":"5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277"} Feb 23 09:12:10 crc kubenswrapper[4940]: I0223 09:12:10.955563 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerDied","Data":"cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510"} Feb 23 09:12:10 crc kubenswrapper[4940]: I0223 09:12:10.955577 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerDied","Data":"891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca"} Feb 23 09:12:11 crc kubenswrapper[4940]: I0223 09:12:11.342945 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:12:11 crc kubenswrapper[4940]: I0223 09:12:11.432156 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-dwspq"] Feb 23 09:12:11 crc kubenswrapper[4940]: I0223 09:12:11.432440 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerName="dnsmasq-dns" containerID="cri-o://8bbfb7a869fbdb4ca66fcf6102a3b88f8c75bcf22c61502094ff3e4a377b11b7" gracePeriod=10 Feb 23 09:12:11 crc kubenswrapper[4940]: I0223 09:12:11.969177 4940 generic.go:334] "Generic (PLEG): container finished" podID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerID="8bbfb7a869fbdb4ca66fcf6102a3b88f8c75bcf22c61502094ff3e4a377b11b7" exitCode=0 Feb 23 09:12:11 crc kubenswrapper[4940]: I0223 09:12:11.969270 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" 
event={"ID":"88000ec3-b551-47f7-99c3-79b10c5dcdaf","Type":"ContainerDied","Data":"8bbfb7a869fbdb4ca66fcf6102a3b88f8c75bcf22c61502094ff3e4a377b11b7"} Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.172133 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.239080 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-swift-storage-0\") pod \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.239188 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwkqf\" (UniqueName: \"kubernetes.io/projected/88000ec3-b551-47f7-99c3-79b10c5dcdaf-kube-api-access-nwkqf\") pod \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.239267 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-sb\") pod \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.239348 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-nb\") pod \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.239394 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-config\") pod \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.239552 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-svc\") pod \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\" (UID: \"88000ec3-b551-47f7-99c3-79b10c5dcdaf\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.273906 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88000ec3-b551-47f7-99c3-79b10c5dcdaf-kube-api-access-nwkqf" (OuterVolumeSpecName: "kube-api-access-nwkqf") pod "88000ec3-b551-47f7-99c3-79b10c5dcdaf" (UID: "88000ec3-b551-47f7-99c3-79b10c5dcdaf"). InnerVolumeSpecName "kube-api-access-nwkqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.311874 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88000ec3-b551-47f7-99c3-79b10c5dcdaf" (UID: "88000ec3-b551-47f7-99c3-79b10c5dcdaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.312706 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88000ec3-b551-47f7-99c3-79b10c5dcdaf" (UID: "88000ec3-b551-47f7-99c3-79b10c5dcdaf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.317014 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-config" (OuterVolumeSpecName: "config") pod "88000ec3-b551-47f7-99c3-79b10c5dcdaf" (UID: "88000ec3-b551-47f7-99c3-79b10c5dcdaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.324516 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "88000ec3-b551-47f7-99c3-79b10c5dcdaf" (UID: "88000ec3-b551-47f7-99c3-79b10c5dcdaf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.343452 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.343791 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwkqf\" (UniqueName: \"kubernetes.io/projected/88000ec3-b551-47f7-99c3-79b10c5dcdaf-kube-api-access-nwkqf\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.343919 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.344318 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-config\") on node \"crc\" 
DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.344518 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.356769 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88000ec3-b551-47f7-99c3-79b10c5dcdaf" (UID: "88000ec3-b551-47f7-99c3-79b10c5dcdaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.387038 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446058 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfqjf\" (UniqueName: \"kubernetes.io/projected/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-kube-api-access-bfqjf\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446165 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-ceilometer-tls-certs\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446206 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-run-httpd\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc 
kubenswrapper[4940]: I0223 09:12:12.446285 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-combined-ca-bundle\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446305 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-config-data\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446346 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-sg-core-conf-yaml\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446498 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-log-httpd\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.446526 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-scripts\") pod \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\" (UID: \"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59\") " Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.447194 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88000ec3-b551-47f7-99c3-79b10c5dcdaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 
09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.447288 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.447507 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.451474 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-scripts" (OuterVolumeSpecName: "scripts") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.452042 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-kube-api-access-bfqjf" (OuterVolumeSpecName: "kube-api-access-bfqjf") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "kube-api-access-bfqjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.476633 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.508770 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.547240 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549514 4940 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549539 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549549 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfqjf\" (UniqueName: \"kubernetes.io/projected/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-kube-api-access-bfqjf\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549579 4940 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549589 4940 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549601 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.549630 4940 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.558043 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-config-data" (OuterVolumeSpecName: "config-data") pod "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" (UID: "6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.652086 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.979827 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" event={"ID":"88000ec3-b551-47f7-99c3-79b10c5dcdaf","Type":"ContainerDied","Data":"3c7bc377ec6854548ce502c58c02554a13bbb84ad40ce08e3db6d2967057a1c3"} Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.979876 4940 scope.go:117] "RemoveContainer" containerID="8bbfb7a869fbdb4ca66fcf6102a3b88f8c75bcf22c61502094ff3e4a377b11b7" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.979880 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6c754dc9-dwspq" Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.982728 4940 generic.go:334] "Generic (PLEG): container finished" podID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerID="2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41" exitCode=0 Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.982773 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerDied","Data":"2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41"} Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.982800 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59","Type":"ContainerDied","Data":"d1b944b4afb6c500c651e3d8e2204841c18c0bfd65f3ef6fc188d60092844884"} Feb 23 09:12:12 crc kubenswrapper[4940]: I0223 09:12:12.982862 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.003302 4940 scope.go:117] "RemoveContainer" containerID="965bbe927c7833763a52cfb025f821b1df8b7bf32a4f0e279b3b2007fa5ab308" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.032335 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-dwspq"] Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.043561 4940 scope.go:117] "RemoveContainer" containerID="5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.051087 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b6c754dc9-dwspq"] Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.063308 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.083189 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.091679 4940 scope.go:117] "RemoveContainer" containerID="cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.108325 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.108913 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-central-agent" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.108937 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-central-agent" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.108965 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerName="dnsmasq-dns" Feb 23 09:12:13 crc 
kubenswrapper[4940]: I0223 09:12:13.108974 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerName="dnsmasq-dns" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.108987 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="proxy-httpd" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.108996 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="proxy-httpd" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.109012 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerName="init" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109020 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerName="init" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.109037 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-notification-agent" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109045 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-notification-agent" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.109062 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="sg-core" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109069 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="sg-core" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109328 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="proxy-httpd" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109353 
4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="sg-core" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109363 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-central-agent" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109377 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" containerName="dnsmasq-dns" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.109393 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" containerName="ceilometer-notification-agent" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.111741 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.113977 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.117738 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.117960 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.120338 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.130440 4940 scope.go:117] "RemoveContainer" containerID="891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.152049 4940 scope.go:117] "RemoveContainer" containerID="2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165579 
4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-scripts\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165662 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3545f0d9-3f75-4de3-ab04-716362d1a057-log-httpd\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165701 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3545f0d9-3f75-4de3-ab04-716362d1a057-run-httpd\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165753 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165793 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165836 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrx4h\" (UniqueName: 
\"kubernetes.io/projected/3545f0d9-3f75-4de3-ab04-716362d1a057-kube-api-access-rrx4h\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165863 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-config-data\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.165932 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.178455 4940 scope.go:117] "RemoveContainer" containerID="5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.179094 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277\": container with ID starting with 5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277 not found: ID does not exist" containerID="5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.179129 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277"} err="failed to get container status \"5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277\": rpc error: code = NotFound desc = could not find container 
\"5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277\": container with ID starting with 5da7f7a5d6dfd748769abbafd5488305835eba4da623241da49f0d3c1bac2277 not found: ID does not exist" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.179154 4940 scope.go:117] "RemoveContainer" containerID="cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.179604 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510\": container with ID starting with cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510 not found: ID does not exist" containerID="cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.179672 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510"} err="failed to get container status \"cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510\": rpc error: code = NotFound desc = could not find container \"cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510\": container with ID starting with cad5fc5debebe4a272309f9f82f4688e5c078783da2c2291809153d749d9a510 not found: ID does not exist" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.179705 4940 scope.go:117] "RemoveContainer" containerID="891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.180177 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca\": container with ID starting with 891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca not found: ID does not exist" 
containerID="891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.180205 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca"} err="failed to get container status \"891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca\": rpc error: code = NotFound desc = could not find container \"891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca\": container with ID starting with 891c4ab12aa1dd3d46128f62f8a4595bc7086e20d4b321393b3a581e130b83ca not found: ID does not exist" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.180221 4940 scope.go:117] "RemoveContainer" containerID="2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41" Feb 23 09:12:13 crc kubenswrapper[4940]: E0223 09:12:13.180552 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41\": container with ID starting with 2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41 not found: ID does not exist" containerID="2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.180883 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41"} err="failed to get container status \"2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41\": rpc error: code = NotFound desc = could not find container \"2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41\": container with ID starting with 2e3fd33ec21a9aa8d5cf3efedda211d2d3f09ea74496e9859a0565aa9a5baf41 not found: ID does not exist" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.267648 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.267788 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-scripts\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.267832 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3545f0d9-3f75-4de3-ab04-716362d1a057-log-httpd\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.267863 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3545f0d9-3f75-4de3-ab04-716362d1a057-run-httpd\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.267916 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.268475 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3545f0d9-3f75-4de3-ab04-716362d1a057-run-httpd\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") 
" pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.268634 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.268714 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3545f0d9-3f75-4de3-ab04-716362d1a057-log-httpd\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.269109 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrx4h\" (UniqueName: \"kubernetes.io/projected/3545f0d9-3f75-4de3-ab04-716362d1a057-kube-api-access-rrx4h\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.269139 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-config-data\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.273212 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-scripts\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.273407 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.284676 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-config-data\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.287406 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.287715 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrx4h\" (UniqueName: \"kubernetes.io/projected/3545f0d9-3f75-4de3-ab04-716362d1a057-kube-api-access-rrx4h\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.288062 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3545f0d9-3f75-4de3-ab04-716362d1a057-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3545f0d9-3f75-4de3-ab04-716362d1a057\") " pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.357033 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59" path="/var/lib/kubelet/pods/6e70b635-2fe6-4e11-97ca-1cc8bb5e3d59/volumes" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.358512 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="88000ec3-b551-47f7-99c3-79b10c5dcdaf" path="/var/lib/kubelet/pods/88000ec3-b551-47f7-99c3-79b10c5dcdaf/volumes" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.427141 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.434918 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.447902 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:13 crc kubenswrapper[4940]: I0223 09:12:13.915542 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.004540 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3545f0d9-3f75-4de3-ab04-716362d1a057","Type":"ContainerStarted","Data":"ea9dea29b28e94fcc1edb784a9b26a01cc2595f4626cb478c148d733ce951415"} Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.044148 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.273478 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2nkmb"] Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.274722 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.277308 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.277588 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.303556 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2nkmb"] Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.406492 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-config-data\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.406704 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.406778 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxld\" (UniqueName: \"kubernetes.io/projected/79be8ad1-5c0e-41a0-b293-46a293c25212-kube-api-access-xpxld\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.406814 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-scripts\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.508836 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxld\" (UniqueName: \"kubernetes.io/projected/79be8ad1-5c0e-41a0-b293-46a293c25212-kube-api-access-xpxld\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.508913 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-scripts\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.509590 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-config-data\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.509716 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.513310 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-scripts\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.513344 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.513873 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-config-data\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.538332 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxld\" (UniqueName: \"kubernetes.io/projected/79be8ad1-5c0e-41a0-b293-46a293c25212-kube-api-access-xpxld\") pod \"nova-cell1-cell-mapping-2nkmb\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:14 crc kubenswrapper[4940]: I0223 09:12:14.599503 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:15 crc kubenswrapper[4940]: I0223 09:12:15.022688 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3545f0d9-3f75-4de3-ab04-716362d1a057","Type":"ContainerStarted","Data":"ed3d9e013a12f498c554d40a010c4041913be9783d28a4dba9ddb40dd70bdb38"} Feb 23 09:12:15 crc kubenswrapper[4940]: I0223 09:12:15.062426 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2nkmb"] Feb 23 09:12:15 crc kubenswrapper[4940]: W0223 09:12:15.063426 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79be8ad1_5c0e_41a0_b293_46a293c25212.slice/crio-7f4d81a4b685a0d5e552d775022b88f61534e2e0c4f4535e3f6f037fdceda24d WatchSource:0}: Error finding container 7f4d81a4b685a0d5e552d775022b88f61534e2e0c4f4535e3f6f037fdceda24d: Status 404 returned error can't find the container with id 7f4d81a4b685a0d5e552d775022b88f61534e2e0c4f4535e3f6f037fdceda24d Feb 23 09:12:16 crc kubenswrapper[4940]: I0223 09:12:16.033243 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3545f0d9-3f75-4de3-ab04-716362d1a057","Type":"ContainerStarted","Data":"5657ffb3e48379d115a34568a5e8a26c840a1ee79be6ef582a6ac637cc6a1d7b"} Feb 23 09:12:16 crc kubenswrapper[4940]: I0223 09:12:16.033535 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3545f0d9-3f75-4de3-ab04-716362d1a057","Type":"ContainerStarted","Data":"f266e97527a686c313b80c1616dfb6e8d279ea864c5cb7d4a5f48adb27d74765"} Feb 23 09:12:16 crc kubenswrapper[4940]: I0223 09:12:16.034551 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2nkmb" event={"ID":"79be8ad1-5c0e-41a0-b293-46a293c25212","Type":"ContainerStarted","Data":"411c1a85a40487c82b15b811ee4b3e06cf9eadd0da0077b096e2a940b32afbba"} Feb 23 
09:12:16 crc kubenswrapper[4940]: I0223 09:12:16.034575 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2nkmb" event={"ID":"79be8ad1-5c0e-41a0-b293-46a293c25212","Type":"ContainerStarted","Data":"7f4d81a4b685a0d5e552d775022b88f61534e2e0c4f4535e3f6f037fdceda24d"} Feb 23 09:12:16 crc kubenswrapper[4940]: I0223 09:12:16.050593 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2nkmb" podStartSLOduration=2.0505776 podStartE2EDuration="2.0505776s" podCreationTimestamp="2026-02-23 09:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:16.047082801 +0000 UTC m=+1467.430288988" watchObservedRunningTime="2026-02-23 09:12:16.0505776 +0000 UTC m=+1467.433783757" Feb 23 09:12:18 crc kubenswrapper[4940]: I0223 09:12:18.059863 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3545f0d9-3f75-4de3-ab04-716362d1a057","Type":"ContainerStarted","Data":"d0d9d0ca0b00a27a0b60f61058daddb7d37cd80bd7248393ff2176cf8644dcf8"} Feb 23 09:12:18 crc kubenswrapper[4940]: I0223 09:12:18.062239 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 09:12:18 crc kubenswrapper[4940]: I0223 09:12:18.104355 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.71432272 podStartE2EDuration="5.104330604s" podCreationTimestamp="2026-02-23 09:12:13 +0000 UTC" firstStartedPulling="2026-02-23 09:12:13.920068903 +0000 UTC m=+1465.303275060" lastFinishedPulling="2026-02-23 09:12:17.310076757 +0000 UTC m=+1468.693282944" observedRunningTime="2026-02-23 09:12:18.086151282 +0000 UTC m=+1469.469357449" watchObservedRunningTime="2026-02-23 09:12:18.104330604 +0000 UTC m=+1469.487536771" Feb 23 09:12:18 crc kubenswrapper[4940]: I0223 
09:12:18.364769 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 09:12:18 crc kubenswrapper[4940]: I0223 09:12:18.364819 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 09:12:19 crc kubenswrapper[4940]: I0223 09:12:19.376744 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:12:19 crc kubenswrapper[4940]: I0223 09:12:19.376812 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:12:21 crc kubenswrapper[4940]: I0223 09:12:21.089244 4940 generic.go:334] "Generic (PLEG): container finished" podID="79be8ad1-5c0e-41a0-b293-46a293c25212" containerID="411c1a85a40487c82b15b811ee4b3e06cf9eadd0da0077b096e2a940b32afbba" exitCode=0 Feb 23 09:12:21 crc kubenswrapper[4940]: I0223 09:12:21.089487 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2nkmb" event={"ID":"79be8ad1-5c0e-41a0-b293-46a293c25212","Type":"ContainerDied","Data":"411c1a85a40487c82b15b811ee4b3e06cf9eadd0da0077b096e2a940b32afbba"} Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.496527 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.585397 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-scripts\") pod \"79be8ad1-5c0e-41a0-b293-46a293c25212\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.585467 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-combined-ca-bundle\") pod \"79be8ad1-5c0e-41a0-b293-46a293c25212\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.585521 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxld\" (UniqueName: \"kubernetes.io/projected/79be8ad1-5c0e-41a0-b293-46a293c25212-kube-api-access-xpxld\") pod \"79be8ad1-5c0e-41a0-b293-46a293c25212\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.585709 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-config-data\") pod \"79be8ad1-5c0e-41a0-b293-46a293c25212\" (UID: \"79be8ad1-5c0e-41a0-b293-46a293c25212\") " Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.596950 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79be8ad1-5c0e-41a0-b293-46a293c25212-kube-api-access-xpxld" (OuterVolumeSpecName: "kube-api-access-xpxld") pod "79be8ad1-5c0e-41a0-b293-46a293c25212" (UID: "79be8ad1-5c0e-41a0-b293-46a293c25212"). InnerVolumeSpecName "kube-api-access-xpxld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.599789 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-scripts" (OuterVolumeSpecName: "scripts") pod "79be8ad1-5c0e-41a0-b293-46a293c25212" (UID: "79be8ad1-5c0e-41a0-b293-46a293c25212"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.714161 4940 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.715188 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxld\" (UniqueName: \"kubernetes.io/projected/79be8ad1-5c0e-41a0-b293-46a293c25212-kube-api-access-xpxld\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.716780 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-config-data" (OuterVolumeSpecName: "config-data") pod "79be8ad1-5c0e-41a0-b293-46a293c25212" (UID: "79be8ad1-5c0e-41a0-b293-46a293c25212"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.727861 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79be8ad1-5c0e-41a0-b293-46a293c25212" (UID: "79be8ad1-5c0e-41a0-b293-46a293c25212"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.817117 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:22 crc kubenswrapper[4940]: I0223 09:12:22.817164 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79be8ad1-5c0e-41a0-b293-46a293c25212-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.109310 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2nkmb" event={"ID":"79be8ad1-5c0e-41a0-b293-46a293c25212","Type":"ContainerDied","Data":"7f4d81a4b685a0d5e552d775022b88f61534e2e0c4f4535e3f6f037fdceda24d"} Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.109557 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f4d81a4b685a0d5e552d775022b88f61534e2e0c4f4535e3f6f037fdceda24d" Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.109395 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2nkmb" Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.306465 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.306814 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-log" containerID="cri-o://6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d" gracePeriod=30 Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.307389 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-api" containerID="cri-o://184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e" gracePeriod=30 Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.321323 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.321578 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerName="nova-scheduler-scheduler" containerID="cri-o://c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" gracePeriod=30 Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.336586 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.337077 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-metadata" containerID="cri-o://86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc" gracePeriod=30 Feb 23 09:12:23 crc kubenswrapper[4940]: I0223 09:12:23.336945 4940 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-log" containerID="cri-o://6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b" gracePeriod=30 Feb 23 09:12:23 crc kubenswrapper[4940]: E0223 09:12:23.972344 4940 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 09:12:23 crc kubenswrapper[4940]: E0223 09:12:23.974796 4940 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 09:12:23 crc kubenswrapper[4940]: E0223 09:12:23.975826 4940 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 09:12:23 crc kubenswrapper[4940]: E0223 09:12:23.975874 4940 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerName="nova-scheduler-scheduler" Feb 23 09:12:24 crc kubenswrapper[4940]: I0223 09:12:24.254404 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerID="6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d" exitCode=143 Feb 23 09:12:24 crc kubenswrapper[4940]: I0223 09:12:24.254471 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6","Type":"ContainerDied","Data":"6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d"} Feb 23 09:12:24 crc kubenswrapper[4940]: I0223 09:12:24.256796 4940 generic.go:334] "Generic (PLEG): container finished" podID="fca2de66-25b8-4d49-8283-8870c62104d3" containerID="6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b" exitCode=143 Feb 23 09:12:24 crc kubenswrapper[4940]: I0223 09:12:24.256861 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca2de66-25b8-4d49-8283-8870c62104d3","Type":"ContainerDied","Data":"6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b"} Feb 23 09:12:26 crc kubenswrapper[4940]: I0223 09:12:26.540665 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:47668->10.217.0.214:8775: read: connection reset by peer" Feb 23 09:12:26 crc kubenswrapper[4940]: I0223 09:12:26.540665 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:47678->10.217.0.214:8775: read: connection reset by peer" Feb 23 09:12:26 crc kubenswrapper[4940]: E0223 09:12:26.772327 4940 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16a728c5_ed82_4c5a_86ee_8d2b2442dbd6.slice/crio-184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfca2de66_25b8_4d49_8283_8870c62104d3.slice/crio-86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc.scope\": RecentStats: unable to find data in memory cache]" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.050393 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.057584 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141307 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-nova-metadata-tls-certs\") pod \"fca2de66-25b8-4d49-8283-8870c62104d3\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141373 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-config-data\") pod \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141413 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-combined-ca-bundle\") pod \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141466 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-public-tls-certs\") pod \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141594 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwg59\" (UniqueName: \"kubernetes.io/projected/fca2de66-25b8-4d49-8283-8870c62104d3-kube-api-access-kwg59\") pod \"fca2de66-25b8-4d49-8283-8870c62104d3\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141769 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-internal-tls-certs\") pod \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141825 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-logs\") pod \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141867 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-config-data\") pod \"fca2de66-25b8-4d49-8283-8870c62104d3\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141896 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca2de66-25b8-4d49-8283-8870c62104d3-logs\") pod \"fca2de66-25b8-4d49-8283-8870c62104d3\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") 
" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141937 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-combined-ca-bundle\") pod \"fca2de66-25b8-4d49-8283-8870c62104d3\" (UID: \"fca2de66-25b8-4d49-8283-8870c62104d3\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.141967 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2xqz\" (UniqueName: \"kubernetes.io/projected/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-kube-api-access-k2xqz\") pod \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\" (UID: \"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6\") " Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.142470 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca2de66-25b8-4d49-8283-8870c62104d3-logs" (OuterVolumeSpecName: "logs") pod "fca2de66-25b8-4d49-8283-8870c62104d3" (UID: "fca2de66-25b8-4d49-8283-8870c62104d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.142916 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-logs" (OuterVolumeSpecName: "logs") pod "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" (UID: "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.144104 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.144137 4940 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca2de66-25b8-4d49-8283-8870c62104d3-logs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.149662 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-kube-api-access-k2xqz" (OuterVolumeSpecName: "kube-api-access-k2xqz") pod "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" (UID: "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6"). InnerVolumeSpecName "kube-api-access-k2xqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.155852 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca2de66-25b8-4d49-8283-8870c62104d3-kube-api-access-kwg59" (OuterVolumeSpecName: "kube-api-access-kwg59") pod "fca2de66-25b8-4d49-8283-8870c62104d3" (UID: "fca2de66-25b8-4d49-8283-8870c62104d3"). InnerVolumeSpecName "kube-api-access-kwg59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.188244 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fca2de66-25b8-4d49-8283-8870c62104d3" (UID: "fca2de66-25b8-4d49-8283-8870c62104d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.188392 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-config-data" (OuterVolumeSpecName: "config-data") pod "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" (UID: "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.188498 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" (UID: "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.219845 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-config-data" (OuterVolumeSpecName: "config-data") pod "fca2de66-25b8-4d49-8283-8870c62104d3" (UID: "fca2de66-25b8-4d49-8283-8870c62104d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.220242 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" (UID: "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245708 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245742 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245756 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2xqz\" (UniqueName: \"kubernetes.io/projected/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-kube-api-access-k2xqz\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245764 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245774 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245784 4940 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.245795 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwg59\" (UniqueName: \"kubernetes.io/projected/fca2de66-25b8-4d49-8283-8870c62104d3-kube-api-access-kwg59\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 
09:12:27.252398 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" (UID: "16a728c5-ed82-4c5a-86ee-8d2b2442dbd6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.261250 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fca2de66-25b8-4d49-8283-8870c62104d3" (UID: "fca2de66-25b8-4d49-8283-8870c62104d3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.292362 4940 generic.go:334] "Generic (PLEG): container finished" podID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerID="184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e" exitCode=0 Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.292437 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6","Type":"ContainerDied","Data":"184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e"} Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.292469 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"16a728c5-ed82-4c5a-86ee-8d2b2442dbd6","Type":"ContainerDied","Data":"a726096242b518a60edbe45d234f57d82c56bad94af169fd6706c22711793e39"} Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.292483 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.292495 4940 scope.go:117] "RemoveContainer" containerID="184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.296105 4940 generic.go:334] "Generic (PLEG): container finished" podID="fca2de66-25b8-4d49-8283-8870c62104d3" containerID="86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc" exitCode=0 Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.296139 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca2de66-25b8-4d49-8283-8870c62104d3","Type":"ContainerDied","Data":"86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc"} Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.296171 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fca2de66-25b8-4d49-8283-8870c62104d3","Type":"ContainerDied","Data":"83df78500425ea9bebce9b466c18c7b4bc2e69104c68348789a02c092856e0f4"} Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.296242 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.342761 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.347350 4940 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.347391 4940 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fca2de66-25b8-4d49-8283-8870c62104d3-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.361562 4940 scope.go:117] "RemoveContainer" containerID="6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.385558 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.385598 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.386631 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.396044 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.396684 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-api" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.396713 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-api" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.396741 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-metadata" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.396750 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-metadata" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.396763 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79be8ad1-5c0e-41a0-b293-46a293c25212" containerName="nova-manage" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.396771 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="79be8ad1-5c0e-41a0-b293-46a293c25212" containerName="nova-manage" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.396792 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-log" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.396799 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-log" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.396811 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-log" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.396819 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-log" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.397130 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="79be8ad1-5c0e-41a0-b293-46a293c25212" containerName="nova-manage" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.397165 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-api" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.397182 4940 
memory_manager.go:354] "RemoveStaleState removing state" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" containerName="nova-api-log" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.397207 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-metadata" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.397218 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" containerName="nova-metadata-log" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.398718 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.408462 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.408604 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.408710 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.411513 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.425569 4940 scope.go:117] "RemoveContainer" containerID="184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.426361 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e\": container with ID starting with 184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e not found: ID does not exist" containerID="184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e" 
Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.426435 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e"} err="failed to get container status \"184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e\": rpc error: code = NotFound desc = could not find container \"184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e\": container with ID starting with 184d86a46cc6ae21aa66be071de5a4c5c5a5259dba16256ccdfe17b1358f392e not found: ID does not exist" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.426458 4940 scope.go:117] "RemoveContainer" containerID="6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.426801 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d\": container with ID starting with 6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d not found: ID does not exist" containerID="6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.426824 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d"} err="failed to get container status \"6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d\": rpc error: code = NotFound desc = could not find container \"6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d\": container with ID starting with 6e1d6d935bcf139cf87179127110e5d97c6f0e6289235ef8869a430b740ecc4d not found: ID does not exist" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.426837 4940 scope.go:117] "RemoveContainer" 
containerID="86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.460991 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10792692-8f84-43da-aea3-46d28e5ba1f5-logs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.461683 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-public-tls-certs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.461867 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.461960 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-config-data\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.462179 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrld\" (UniqueName: \"kubernetes.io/projected/10792692-8f84-43da-aea3-46d28e5ba1f5-kube-api-access-xjrld\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.462265 4940 
scope.go:117] "RemoveContainer" containerID="6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.462408 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.483992 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.486479 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.496204 4940 scope.go:117] "RemoveContainer" containerID="86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.499793 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.500004 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc\": container with ID starting with 86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc not found: ID does not exist" containerID="86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.500065 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc"} err="failed to get container status \"86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc\": rpc error: code = NotFound desc = could not find container 
\"86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc\": container with ID starting with 86518621e8ea7a9d2574f992c8be80ab9dc711fef8a8c3753902740fbbbdfdfc not found: ID does not exist" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.500095 4940 scope.go:117] "RemoveContainer" containerID="6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.500483 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.500859 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 09:12:27 crc kubenswrapper[4940]: E0223 09:12:27.501223 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b\": container with ID starting with 6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b not found: ID does not exist" containerID="6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.501267 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b"} err="failed to get container status \"6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b\": rpc error: code = NotFound desc = could not find container \"6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b\": container with ID starting with 6f6639a8ec71ca5ca0c1ea34f58c42a43b52ee354cc5dfc3c6d91d780bcdaf5b not found: ID does not exist" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569188 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-public-tls-certs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569252 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72fzk\" (UniqueName: \"kubernetes.io/projected/38eb6728-c410-4f85-ac35-969880b14e26-kube-api-access-72fzk\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569296 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569315 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-config-data\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569498 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-config-data\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569584 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjrld\" (UniqueName: \"kubernetes.io/projected/10792692-8f84-43da-aea3-46d28e5ba1f5-kube-api-access-xjrld\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " 
pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569652 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569672 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569779 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10792692-8f84-43da-aea3-46d28e5ba1f5-logs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569828 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38eb6728-c410-4f85-ac35-969880b14e26-logs\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.569955 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.572182 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/10792692-8f84-43da-aea3-46d28e5ba1f5-logs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.576094 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.577963 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.579303 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-public-tls-certs\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.579597 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10792692-8f84-43da-aea3-46d28e5ba1f5-config-data\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.589037 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjrld\" (UniqueName: \"kubernetes.io/projected/10792692-8f84-43da-aea3-46d28e5ba1f5-kube-api-access-xjrld\") pod \"nova-api-0\" (UID: \"10792692-8f84-43da-aea3-46d28e5ba1f5\") " pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.672157 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38eb6728-c410-4f85-ac35-969880b14e26-logs\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.672269 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.672346 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72fzk\" (UniqueName: \"kubernetes.io/projected/38eb6728-c410-4f85-ac35-969880b14e26-kube-api-access-72fzk\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.672434 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-config-data\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.672477 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.673108 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38eb6728-c410-4f85-ac35-969880b14e26-logs\") pod 
\"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.676009 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.676572 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.677233 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38eb6728-c410-4f85-ac35-969880b14e26-config-data\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.689887 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72fzk\" (UniqueName: \"kubernetes.io/projected/38eb6728-c410-4f85-ac35-969880b14e26-kube-api-access-72fzk\") pod \"nova-metadata-0\" (UID: \"38eb6728-c410-4f85-ac35-969880b14e26\") " pod="openstack/nova-metadata-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.916234 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 09:12:27 crc kubenswrapper[4940]: I0223 09:12:27.923382 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 09:12:28 crc kubenswrapper[4940]: I0223 09:12:28.454178 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 09:12:28 crc kubenswrapper[4940]: W0223 09:12:28.465137 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10792692_8f84_43da_aea3_46d28e5ba1f5.slice/crio-26120252803c6342ee376901de95ef4fde605eded1bd705dbf9c49d3b6e49598 WatchSource:0}: Error finding container 26120252803c6342ee376901de95ef4fde605eded1bd705dbf9c49d3b6e49598: Status 404 returned error can't find the container with id 26120252803c6342ee376901de95ef4fde605eded1bd705dbf9c49d3b6e49598 Feb 23 09:12:28 crc kubenswrapper[4940]: I0223 09:12:28.533887 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 09:12:28 crc kubenswrapper[4940]: E0223 09:12:28.971158 4940 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e is running failed: container process not found" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 09:12:28 crc kubenswrapper[4940]: E0223 09:12:28.972189 4940 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e is running failed: container process not found" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 09:12:28 crc kubenswrapper[4940]: E0223 09:12:28.972605 4940 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e is running failed: container process not found" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 09:12:28 crc kubenswrapper[4940]: E0223 09:12:28.972662 4940 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerName="nova-scheduler-scheduler" Feb 23 09:12:28 crc kubenswrapper[4940]: I0223 09:12:28.988554 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.092342 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cnmt\" (UniqueName: \"kubernetes.io/projected/da9fd460-b67b-4141-8f98-a68f5d73aec4-kube-api-access-9cnmt\") pod \"da9fd460-b67b-4141-8f98-a68f5d73aec4\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.092782 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-combined-ca-bundle\") pod \"da9fd460-b67b-4141-8f98-a68f5d73aec4\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.093042 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-config-data\") pod \"da9fd460-b67b-4141-8f98-a68f5d73aec4\" (UID: \"da9fd460-b67b-4141-8f98-a68f5d73aec4\") " Feb 23 09:12:29 
crc kubenswrapper[4940]: I0223 09:12:29.100233 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9fd460-b67b-4141-8f98-a68f5d73aec4-kube-api-access-9cnmt" (OuterVolumeSpecName: "kube-api-access-9cnmt") pod "da9fd460-b67b-4141-8f98-a68f5d73aec4" (UID: "da9fd460-b67b-4141-8f98-a68f5d73aec4"). InnerVolumeSpecName "kube-api-access-9cnmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.122495 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-config-data" (OuterVolumeSpecName: "config-data") pod "da9fd460-b67b-4141-8f98-a68f5d73aec4" (UID: "da9fd460-b67b-4141-8f98-a68f5d73aec4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.129733 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da9fd460-b67b-4141-8f98-a68f5d73aec4" (UID: "da9fd460-b67b-4141-8f98-a68f5d73aec4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.195796 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.195834 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9fd460-b67b-4141-8f98-a68f5d73aec4-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.195844 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cnmt\" (UniqueName: \"kubernetes.io/projected/da9fd460-b67b-4141-8f98-a68f5d73aec4-kube-api-access-9cnmt\") on node \"crc\" DevicePath \"\"" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.389679 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a728c5-ed82-4c5a-86ee-8d2b2442dbd6" path="/var/lib/kubelet/pods/16a728c5-ed82-4c5a-86ee-8d2b2442dbd6/volumes" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.390798 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca2de66-25b8-4d49-8283-8870c62104d3" path="/var/lib/kubelet/pods/fca2de66-25b8-4d49-8283-8870c62104d3/volumes" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.391411 4940 generic.go:334] "Generic (PLEG): container finished" podID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" exitCode=0 Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.391459 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392320 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10792692-8f84-43da-aea3-46d28e5ba1f5","Type":"ContainerStarted","Data":"04a015775b6050c218c89888b9daff9dfe9806ad03f34314fd96f5667a3bea79"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392353 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10792692-8f84-43da-aea3-46d28e5ba1f5","Type":"ContainerStarted","Data":"f1db46640e43f26447d9fff130f74be476a484a2d4ecc62aff6ab5a6f72f0305"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392365 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10792692-8f84-43da-aea3-46d28e5ba1f5","Type":"ContainerStarted","Data":"26120252803c6342ee376901de95ef4fde605eded1bd705dbf9c49d3b6e49598"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392375 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38eb6728-c410-4f85-ac35-969880b14e26","Type":"ContainerStarted","Data":"683c155d5a30760647ba950f124bb5c98abd6ed2a896251a8a155ae3f23bec66"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392386 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38eb6728-c410-4f85-ac35-969880b14e26","Type":"ContainerStarted","Data":"ec9bc951a64f07e6798515b1d3ade21192a19d68c4d530783993791e90e8a365"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392396 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da9fd460-b67b-4141-8f98-a68f5d73aec4","Type":"ContainerDied","Data":"c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392409 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"da9fd460-b67b-4141-8f98-a68f5d73aec4","Type":"ContainerDied","Data":"47059df0300a7bc02bfbc4350792cdc4c8780c3be71cfecd6e0e596d77ac3b99"} Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.392427 4940 scope.go:117] "RemoveContainer" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.424139 4940 scope.go:117] "RemoveContainer" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" Feb 23 09:12:29 crc kubenswrapper[4940]: E0223 09:12:29.425099 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e\": container with ID starting with c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e not found: ID does not exist" containerID="c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.425161 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e"} err="failed to get container status \"c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e\": rpc error: code = NotFound desc = could not find container \"c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e\": container with ID starting with c1ecf7c2ce85585b35fecca7f72e88e82e22a491e7734ab0f3f1dbe10b2a3b9e not found: ID does not exist" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.459782 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.479563 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.491312 4940 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 09:12:29 crc kubenswrapper[4940]: E0223 09:12:29.492084 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerName="nova-scheduler-scheduler" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.492181 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerName="nova-scheduler-scheduler" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.492486 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" containerName="nova-scheduler-scheduler" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.493294 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.499634 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.499597025 podStartE2EDuration="2.499597025s" podCreationTimestamp="2026-02-23 09:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:29.479534325 +0000 UTC m=+1480.862740482" watchObservedRunningTime="2026-02-23 09:12:29.499597025 +0000 UTC m=+1480.882803182" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.501229 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.514835 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.583813 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/261aaecb-ec48-4d96-9579-35057b0d6394-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.583897 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261aaecb-ec48-4d96-9579-35057b0d6394-config-data\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.584027 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbdp\" (UniqueName: \"kubernetes.io/projected/261aaecb-ec48-4d96-9579-35057b0d6394-kube-api-access-zcbdp\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.685704 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbdp\" (UniqueName: \"kubernetes.io/projected/261aaecb-ec48-4d96-9579-35057b0d6394-kube-api-access-zcbdp\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.685825 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261aaecb-ec48-4d96-9579-35057b0d6394-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.685853 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261aaecb-ec48-4d96-9579-35057b0d6394-config-data\") pod \"nova-scheduler-0\" (UID: 
\"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.691939 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261aaecb-ec48-4d96-9579-35057b0d6394-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.692034 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261aaecb-ec48-4d96-9579-35057b0d6394-config-data\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.703276 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbdp\" (UniqueName: \"kubernetes.io/projected/261aaecb-ec48-4d96-9579-35057b0d6394-kube-api-access-zcbdp\") pod \"nova-scheduler-0\" (UID: \"261aaecb-ec48-4d96-9579-35057b0d6394\") " pod="openstack/nova-scheduler-0" Feb 23 09:12:29 crc kubenswrapper[4940]: I0223 09:12:29.830059 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 09:12:30 crc kubenswrapper[4940]: I0223 09:12:30.276634 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 09:12:30 crc kubenswrapper[4940]: W0223 09:12:30.278784 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod261aaecb_ec48_4d96_9579_35057b0d6394.slice/crio-5abc4b417e3ee8225d572a19203ee4d4e2383be67c0256abc19bd3c8eab095c5 WatchSource:0}: Error finding container 5abc4b417e3ee8225d572a19203ee4d4e2383be67c0256abc19bd3c8eab095c5: Status 404 returned error can't find the container with id 5abc4b417e3ee8225d572a19203ee4d4e2383be67c0256abc19bd3c8eab095c5 Feb 23 09:12:30 crc kubenswrapper[4940]: I0223 09:12:30.403105 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"261aaecb-ec48-4d96-9579-35057b0d6394","Type":"ContainerStarted","Data":"5abc4b417e3ee8225d572a19203ee4d4e2383be67c0256abc19bd3c8eab095c5"} Feb 23 09:12:30 crc kubenswrapper[4940]: I0223 09:12:30.405986 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"38eb6728-c410-4f85-ac35-969880b14e26","Type":"ContainerStarted","Data":"a236392d0003f5ac46bf3b1332b54046f65d833b4e16be05680875befbf31dcc"} Feb 23 09:12:31 crc kubenswrapper[4940]: I0223 09:12:31.356635 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9fd460-b67b-4141-8f98-a68f5d73aec4" path="/var/lib/kubelet/pods/da9fd460-b67b-4141-8f98-a68f5d73aec4/volumes" Feb 23 09:12:31 crc kubenswrapper[4940]: I0223 09:12:31.420191 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"261aaecb-ec48-4d96-9579-35057b0d6394","Type":"ContainerStarted","Data":"31a3a52867ff261cc4f17b6b24ded0ad765428c95ec42984bebd53a615e3bca0"} Feb 23 09:12:31 crc kubenswrapper[4940]: I0223 09:12:31.440219 4940 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.44018443 podStartE2EDuration="4.44018443s" podCreationTimestamp="2026-02-23 09:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:30.428896438 +0000 UTC m=+1481.812102615" watchObservedRunningTime="2026-02-23 09:12:31.44018443 +0000 UTC m=+1482.823390587" Feb 23 09:12:31 crc kubenswrapper[4940]: I0223 09:12:31.445116 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.445099494 podStartE2EDuration="2.445099494s" podCreationTimestamp="2026-02-23 09:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:12:31.436452123 +0000 UTC m=+1482.819658290" watchObservedRunningTime="2026-02-23 09:12:31.445099494 +0000 UTC m=+1482.828305651" Feb 23 09:12:32 crc kubenswrapper[4940]: I0223 09:12:32.925371 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 09:12:32 crc kubenswrapper[4940]: I0223 09:12:32.925767 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 09:12:34 crc kubenswrapper[4940]: I0223 09:12:34.830266 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 09:12:38 crc kubenswrapper[4940]: I0223 09:12:38.208714 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 09:12:38 crc kubenswrapper[4940]: I0223 09:12:38.209274 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 09:12:38 crc kubenswrapper[4940]: I0223 09:12:38.212979 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Feb 23 09:12:38 crc kubenswrapper[4940]: I0223 09:12:38.216112 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 09:12:39 crc kubenswrapper[4940]: I0223 09:12:39.213807 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="38eb6728-c410-4f85-ac35-969880b14e26" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:12:39 crc kubenswrapper[4940]: I0223 09:12:39.215018 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="38eb6728-c410-4f85-ac35-969880b14e26" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:12:39 crc kubenswrapper[4940]: I0223 09:12:39.222804 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="10792692-8f84-43da-aea3-46d28e5ba1f5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:12:39 crc kubenswrapper[4940]: I0223 09:12:39.222796 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="10792692-8f84-43da-aea3-46d28e5ba1f5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 09:12:39 crc kubenswrapper[4940]: I0223 09:12:39.830359 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 09:12:39 crc kubenswrapper[4940]: I0223 09:12:39.859207 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Feb 23 09:12:40 crc kubenswrapper[4940]: I0223 09:12:40.798673 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 09:12:43 crc kubenswrapper[4940]: I0223 09:12:43.454557 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 09:12:45 crc kubenswrapper[4940]: I0223 09:12:45.912741 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mm4hq"] Feb 23 09:12:45 crc kubenswrapper[4940]: I0223 09:12:45.915721 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:45 crc kubenswrapper[4940]: I0223 09:12:45.945148 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm4hq"] Feb 23 09:12:45 crc kubenswrapper[4940]: I0223 09:12:45.954824 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dkhx\" (UniqueName: \"kubernetes.io/projected/141e9d14-de58-492b-aa30-9ecf534ce324-kube-api-access-4dkhx\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:45 crc kubenswrapper[4940]: I0223 09:12:45.954897 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-catalog-content\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:45 crc kubenswrapper[4940]: I0223 09:12:45.955066 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-utilities\") pod 
\"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.056883 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dkhx\" (UniqueName: \"kubernetes.io/projected/141e9d14-de58-492b-aa30-9ecf534ce324-kube-api-access-4dkhx\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.056952 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-catalog-content\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.057052 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-utilities\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.057443 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-catalog-content\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.057540 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-utilities\") pod \"redhat-operators-mm4hq\" (UID: 
\"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.077241 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dkhx\" (UniqueName: \"kubernetes.io/projected/141e9d14-de58-492b-aa30-9ecf534ce324-kube-api-access-4dkhx\") pod \"redhat-operators-mm4hq\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") " pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.247368 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.767091 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm4hq"] Feb 23 09:12:46 crc kubenswrapper[4940]: W0223 09:12:46.774909 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod141e9d14_de58_492b_aa30_9ecf534ce324.slice/crio-01b1779120d7c5985f0fb219de86f8d94cbc1fc2d739b87eb8e89717b0c97f66 WatchSource:0}: Error finding container 01b1779120d7c5985f0fb219de86f8d94cbc1fc2d739b87eb8e89717b0c97f66: Status 404 returned error can't find the container with id 01b1779120d7c5985f0fb219de86f8d94cbc1fc2d739b87eb8e89717b0c97f66 Feb 23 09:12:46 crc kubenswrapper[4940]: I0223 09:12:46.833900 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerStarted","Data":"01b1779120d7c5985f0fb219de86f8d94cbc1fc2d739b87eb8e89717b0c97f66"} Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.847864 4940 generic.go:334] "Generic (PLEG): container finished" podID="141e9d14-de58-492b-aa30-9ecf534ce324" containerID="ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6" exitCode=0 Feb 23 09:12:47 crc 
kubenswrapper[4940]: I0223 09:12:47.847963 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerDied","Data":"ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6"} Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.924683 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.925485 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.925573 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.930162 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.931368 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.934017 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 09:12:47 crc kubenswrapper[4940]: I0223 09:12:47.941671 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 09:12:48 crc kubenswrapper[4940]: I0223 09:12:48.860649 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerStarted","Data":"0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78"} Feb 23 09:12:48 crc kubenswrapper[4940]: I0223 09:12:48.861082 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 09:12:48 crc kubenswrapper[4940]: I0223 09:12:48.867741 
4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 09:12:48 crc kubenswrapper[4940]: I0223 09:12:48.867827 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 09:12:51 crc kubenswrapper[4940]: I0223 09:12:51.418435 4940 generic.go:334] "Generic (PLEG): container finished" podID="141e9d14-de58-492b-aa30-9ecf534ce324" containerID="0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78" exitCode=0 Feb 23 09:12:51 crc kubenswrapper[4940]: I0223 09:12:51.418523 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerDied","Data":"0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78"} Feb 23 09:12:52 crc kubenswrapper[4940]: I0223 09:12:52.431304 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerStarted","Data":"2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808"} Feb 23 09:12:52 crc kubenswrapper[4940]: I0223 09:12:52.451460 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mm4hq" podStartSLOduration=3.275681386 podStartE2EDuration="7.451425139s" podCreationTimestamp="2026-02-23 09:12:45 +0000 UTC" firstStartedPulling="2026-02-23 09:12:47.850770434 +0000 UTC m=+1499.233976631" lastFinishedPulling="2026-02-23 09:12:52.026514227 +0000 UTC m=+1503.409720384" observedRunningTime="2026-02-23 09:12:52.449281702 +0000 UTC m=+1503.832487899" watchObservedRunningTime="2026-02-23 09:12:52.451425139 +0000 UTC m=+1503.834631316" Feb 23 09:12:56 crc kubenswrapper[4940]: I0223 09:12:56.248776 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 
09:12:56 crc kubenswrapper[4940]: I0223 09:12:56.249353 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm4hq" Feb 23 09:12:57 crc kubenswrapper[4940]: I0223 09:12:57.048376 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:12:57 crc kubenswrapper[4940]: I0223 09:12:57.326283 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm4hq" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="registry-server" probeResult="failure" output=< Feb 23 09:12:57 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:12:57 crc kubenswrapper[4940]: > Feb 23 09:12:57 crc kubenswrapper[4940]: I0223 09:12:57.968968 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:13:01 crc kubenswrapper[4940]: I0223 09:13:01.429124 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:13:01 crc kubenswrapper[4940]: I0223 09:13:01.429733 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:13:01 crc kubenswrapper[4940]: I0223 09:13:01.450684 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="987e4448-8da2-41e3-9dba-777d599609f5" containerName="rabbitmq" containerID="cri-o://01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259" gracePeriod=604796 
Feb 23 09:13:02 crc kubenswrapper[4940]: I0223 09:13:02.610913 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="rabbitmq" containerID="cri-o://4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f" gracePeriod=604796 Feb 23 09:13:07 crc kubenswrapper[4940]: I0223 09:13:07.304695 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm4hq" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="registry-server" probeResult="failure" output=< Feb 23 09:13:07 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:13:07 crc kubenswrapper[4940]: > Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.075367 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225414 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-erlang-cookie\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225466 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-confd\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225498 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-plugins\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: 
\"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225564 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225680 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/987e4448-8da2-41e3-9dba-777d599609f5-erlang-cookie-secret\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225744 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-server-conf\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225814 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-plugins-conf\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225874 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9wgl\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-kube-api-access-m9wgl\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225900 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-config-data\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.225974 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-tls\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.226050 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/987e4448-8da2-41e3-9dba-777d599609f5-pod-info\") pod \"987e4448-8da2-41e3-9dba-777d599609f5\" (UID: \"987e4448-8da2-41e3-9dba-777d599609f5\") " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.226504 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.226563 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.226790 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.230606 4940 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.230660 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.230675 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.234554 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.242870 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/987e4448-8da2-41e3-9dba-777d599609f5-pod-info" (OuterVolumeSpecName: "pod-info") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.244964 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.255152 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/987e4448-8da2-41e3-9dba-777d599609f5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.283394 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-kube-api-access-m9wgl" (OuterVolumeSpecName: "kube-api-access-m9wgl") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "kube-api-access-m9wgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.301650 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-config-data" (OuterVolumeSpecName: "config-data") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.318977 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-server-conf" (OuterVolumeSpecName: "server-conf") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344657 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9wgl\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-kube-api-access-m9wgl\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344698 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344711 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344721 4940 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/987e4448-8da2-41e3-9dba-777d599609f5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 
09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344758 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344770 4940 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/987e4448-8da2-41e3-9dba-777d599609f5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.344784 4940 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/987e4448-8da2-41e3-9dba-777d599609f5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.379147 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.386374 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "987e4448-8da2-41e3-9dba-777d599609f5" (UID: "987e4448-8da2-41e3-9dba-777d599609f5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.447553 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/987e4448-8da2-41e3-9dba-777d599609f5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.447642 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.595102 4940 generic.go:334] "Generic (PLEG): container finished" podID="987e4448-8da2-41e3-9dba-777d599609f5" containerID="01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259" exitCode=0 Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.595143 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"987e4448-8da2-41e3-9dba-777d599609f5","Type":"ContainerDied","Data":"01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259"} Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.595169 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"987e4448-8da2-41e3-9dba-777d599609f5","Type":"ContainerDied","Data":"98e288e021d3d09cd8a267c6d906525d120d6a9fff7050756a42252df693a837"} Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.595187 4940 scope.go:117] "RemoveContainer" containerID="01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.595223 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.620492 4940 scope.go:117] "RemoveContainer" containerID="23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.636829 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.645179 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.661521 4940 scope.go:117] "RemoveContainer" containerID="01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259" Feb 23 09:13:08 crc kubenswrapper[4940]: E0223 09:13:08.663103 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259\": container with ID starting with 01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259 not found: ID does not exist" containerID="01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.663140 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259"} err="failed to get container status \"01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259\": rpc error: code = NotFound desc = could not find container \"01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259\": container with ID starting with 01b0383984f554ff70445159c3a13d303faa206f7750267de58c56e350331259 not found: ID does not exist" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.663161 4940 scope.go:117] "RemoveContainer" containerID="23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713" Feb 23 09:13:08 crc 
kubenswrapper[4940]: E0223 09:13:08.663449 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713\": container with ID starting with 23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713 not found: ID does not exist" containerID="23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.663487 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713"} err="failed to get container status \"23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713\": rpc error: code = NotFound desc = could not find container \"23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713\": container with ID starting with 23d2df5af434448a2262ddf59ca2c552ad6b605e7bc232716cfffc3c75d49713 not found: ID does not exist" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.681104 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:13:08 crc kubenswrapper[4940]: E0223 09:13:08.682482 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987e4448-8da2-41e3-9dba-777d599609f5" containerName="rabbitmq" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.682522 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="987e4448-8da2-41e3-9dba-777d599609f5" containerName="rabbitmq" Feb 23 09:13:08 crc kubenswrapper[4940]: E0223 09:13:08.682567 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987e4448-8da2-41e3-9dba-777d599609f5" containerName="setup-container" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.682576 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="987e4448-8da2-41e3-9dba-777d599609f5" containerName="setup-container" Feb 23 09:13:08 crc 
kubenswrapper[4940]: I0223 09:13:08.683199 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="987e4448-8da2-41e3-9dba-777d599609f5" containerName="rabbitmq" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.684317 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689395 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689476 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689481 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689631 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-g5jj6" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689644 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689417 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.689417 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.714699 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.861926 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862071 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438743de-ddf8-4b10-878a-b87c389cd3b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862343 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862401 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862473 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438743de-ddf8-4b10-878a-b87c389cd3b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862570 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " 
pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862636 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862723 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.862855 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmr8\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-kube-api-access-6jmr8\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.863138 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.863221 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 
09:13:08.965447 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.965817 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.965924 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.965999 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438743de-ddf8-4b10-878a-b87c389cd3b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966027 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966079 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966110 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966136 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438743de-ddf8-4b10-878a-b87c389cd3b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966160 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966192 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966239 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " 
pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.966292 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmr8\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-kube-api-access-6jmr8\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.967067 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.967273 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.967529 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.968194 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/438743de-ddf8-4b10-878a-b87c389cd3b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.968521 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.971893 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/438743de-ddf8-4b10-878a-b87c389cd3b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.976120 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.982394 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/438743de-ddf8-4b10-878a-b87c389cd3b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.984449 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:08 crc kubenswrapper[4940]: I0223 09:13:08.988929 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmr8\" (UniqueName: \"kubernetes.io/projected/438743de-ddf8-4b10-878a-b87c389cd3b6-kube-api-access-6jmr8\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " 
pod="openstack/rabbitmq-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.068211 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"438743de-ddf8-4b10-878a-b87c389cd3b6\") " pod="openstack/rabbitmq-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.170349 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.272651 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-erlang-cookie\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.272751 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-plugins-conf\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.272847 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x59bq\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-kube-api-access-x59bq\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.272888 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7453ffea-4a1f-426c-ac9c-3377973fdb19-erlang-cookie-secret\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: 
\"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.272959 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7453ffea-4a1f-426c-ac9c-3377973fdb19-pod-info\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273005 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273030 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-server-conf\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273075 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-confd\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273101 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-plugins\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273158 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-config-data\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273200 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-tls\") pod \"7453ffea-4a1f-426c-ac9c-3377973fdb19\" (UID: \"7453ffea-4a1f-426c-ac9c-3377973fdb19\") " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273333 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.273981 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.274289 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.274485 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.274511 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.280892 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.282908 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.283378 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-kube-api-access-x59bq" (OuterVolumeSpecName: "kube-api-access-x59bq") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "kube-api-access-x59bq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.284299 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7453ffea-4a1f-426c-ac9c-3377973fdb19-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.284314 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7453ffea-4a1f-426c-ac9c-3377973fdb19-pod-info" (OuterVolumeSpecName: "pod-info") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.308904 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-config-data" (OuterVolumeSpecName: "config-data") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.339838 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-server-conf" (OuterVolumeSpecName: "server-conf") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.343099 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377038 4940 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7453ffea-4a1f-426c-ac9c-3377973fdb19-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377099 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377115 4940 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377127 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377139 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377157 4940 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7453ffea-4a1f-426c-ac9c-3377973fdb19-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377176 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x59bq\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-kube-api-access-x59bq\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.377194 4940 reconciler_common.go:293] 
"Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7453ffea-4a1f-426c-ac9c-3377973fdb19-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.379251 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987e4448-8da2-41e3-9dba-777d599609f5" path="/var/lib/kubelet/pods/987e4448-8da2-41e3-9dba-777d599609f5/volumes" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.403680 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.421371 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7453ffea-4a1f-426c-ac9c-3377973fdb19" (UID: "7453ffea-4a1f-426c-ac9c-3377973fdb19"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.480072 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.480100 4940 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7453ffea-4a1f-426c-ac9c-3377973fdb19-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.611493 4940 generic.go:334] "Generic (PLEG): container finished" podID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerID="4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f" exitCode=0 Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.611586 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7453ffea-4a1f-426c-ac9c-3377973fdb19","Type":"ContainerDied","Data":"4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f"} Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.611695 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.611769 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7453ffea-4a1f-426c-ac9c-3377973fdb19","Type":"ContainerDied","Data":"72648ff7c46df20d8ff12411dce7e3bb33e795f32f329f812bc084bc9863a2af"} Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.611807 4940 scope.go:117] "RemoveContainer" containerID="4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.657778 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.662940 4940 scope.go:117] "RemoveContainer" containerID="bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.713313 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.734604 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:13:09 crc kubenswrapper[4940]: E0223 09:13:09.735354 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="setup-container" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.735378 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="setup-container" Feb 23 09:13:09 crc kubenswrapper[4940]: E0223 09:13:09.735409 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="rabbitmq" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.735416 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="rabbitmq" Feb 23 09:13:09 
crc kubenswrapper[4940]: I0223 09:13:09.735712 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="rabbitmq" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.737046 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.743372 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.743385 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n24ms" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.743416 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.743683 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.743848 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.743978 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.744274 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.751746 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.773573 4940 scope.go:117] "RemoveContainer" containerID="4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f" Feb 23 09:13:09 crc kubenswrapper[4940]: E0223 09:13:09.774258 4940 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f\": container with ID starting with 4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f not found: ID does not exist" containerID="4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.774304 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f"} err="failed to get container status \"4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f\": rpc error: code = NotFound desc = could not find container \"4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f\": container with ID starting with 4ab4848a897a0f76845dc5e576a21f1d49ef5b490b1526a13be85d8b4754c46f not found: ID does not exist" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.774333 4940 scope.go:117] "RemoveContainer" containerID="bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e" Feb 23 09:13:09 crc kubenswrapper[4940]: E0223 09:13:09.774707 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e\": container with ID starting with bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e not found: ID does not exist" containerID="bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.774730 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e"} err="failed to get container status \"bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e\": rpc error: code = NotFound desc = could 
not find container \"bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e\": container with ID starting with bb4754e574db70293cbad0d3fe8fac2eb6386e6f7db81f1c3fe396e83c71ff0e not found: ID does not exist" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.852657 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892022 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892105 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89409108-7455-4318-83ba-65a6dd96d76c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892140 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892181 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 
09:13:09.892202 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892219 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892236 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892252 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfxf\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-kube-api-access-7nfxf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892313 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 
09:13:09.892347 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89409108-7455-4318-83ba-65a6dd96d76c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.892374 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.995133 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.995447 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89409108-7455-4318-83ba-65a6dd96d76c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.995739 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.996895 4940 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.998103 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.998764 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:09 crc kubenswrapper[4940]: I0223 09:13:09.999815 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/89409108-7455-4318-83ba-65a6dd96d76c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.000143 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89409108-7455-4318-83ba-65a6dd96d76c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.001014 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.001272 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.001447 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.001588 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.001586 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.001595 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.002166 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.002212 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nfxf\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-kube-api-access-7nfxf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.002799 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.005196 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/89409108-7455-4318-83ba-65a6dd96d76c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.009183 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/89409108-7455-4318-83ba-65a6dd96d76c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.011332 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.012670 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.174527 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nfxf\" (UniqueName: \"kubernetes.io/projected/89409108-7455-4318-83ba-65a6dd96d76c-kube-api-access-7nfxf\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.192140 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"89409108-7455-4318-83ba-65a6dd96d76c\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.397591 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.632193 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438743de-ddf8-4b10-878a-b87c389cd3b6","Type":"ContainerStarted","Data":"a013d4c5635e52cbf22b5a89bd5c91ef67d73dc35d4ba633174fd82b89df2405"}
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.896127 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-lphr5"]
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.898331 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.907802 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.914321 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-lphr5"]
Feb 23 09:13:10 crc kubenswrapper[4940]: I0223 09:13:10.961246 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095405 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-openstack-edpm-ipam\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095532 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095578 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jhwz\" (UniqueName: \"kubernetes.io/projected/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-kube-api-access-7jhwz\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095633 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-swift-storage-0\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095664 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095732 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-svc\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.095757 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-config\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197531 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-openstack-edpm-ipam\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197664 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197715 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jhwz\" (UniqueName: \"kubernetes.io/projected/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-kube-api-access-7jhwz\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197759 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-swift-storage-0\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197782 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197865 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-svc\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.197891 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-config\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.198723 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.198784 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-config\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.199101 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-svc\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.199313 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-openstack-edpm-ipam\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.199954 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.200089 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-swift-storage-0\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.219237 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jhwz\" (UniqueName: \"kubernetes.io/projected/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-kube-api-access-7jhwz\") pod \"dnsmasq-dns-5559d4f67f-lphr5\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.359515 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" path="/var/lib/kubelet/pods/7453ffea-4a1f-426c-ac9c-3377973fdb19/volumes"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.529512 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:11 crc kubenswrapper[4940]: I0223 09:13:11.645711 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89409108-7455-4318-83ba-65a6dd96d76c","Type":"ContainerStarted","Data":"c88d92839b20d26c53d9a0e1df4eada00cd4bbb856cefde709cd6a1539203701"}
Feb 23 09:13:12 crc kubenswrapper[4940]: I0223 09:13:12.153446 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-lphr5"]
Feb 23 09:13:12 crc kubenswrapper[4940]: W0223 09:13:12.231431 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25fa996a_bfc0_4b64_a588_c45d80c9e0b8.slice/crio-bf3fbdec2c73678ee96684a35d8c06fd17fcaccaee95bb06c75c5b89938fc4ed WatchSource:0}: Error finding container bf3fbdec2c73678ee96684a35d8c06fd17fcaccaee95bb06c75c5b89938fc4ed: Status 404 returned error can't find the container with id bf3fbdec2c73678ee96684a35d8c06fd17fcaccaee95bb06c75c5b89938fc4ed
Feb 23 09:13:12 crc kubenswrapper[4940]: I0223 09:13:12.668181 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" event={"ID":"25fa996a-bfc0-4b64-a588-c45d80c9e0b8","Type":"ContainerStarted","Data":"bf3fbdec2c73678ee96684a35d8c06fd17fcaccaee95bb06c75c5b89938fc4ed"}
Feb 23 09:13:12 crc kubenswrapper[4940]: I0223 09:13:12.671797 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438743de-ddf8-4b10-878a-b87c389cd3b6","Type":"ContainerStarted","Data":"6a1f86722dfe96633622e2b6662c60cd9c8f167ffc1d4e04f4dc76c1bff7da66"}
Feb 23 09:13:13 crc kubenswrapper[4940]: I0223 09:13:13.686755 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89409108-7455-4318-83ba-65a6dd96d76c","Type":"ContainerStarted","Data":"bb131a2b964b3b473a6d1a8f721051c3d61ebc0803e88011037b91b236d31517"}
Feb 23 09:13:13 crc kubenswrapper[4940]: I0223 09:13:13.692772 4940 generic.go:334] "Generic (PLEG): container finished" podID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerID="e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960" exitCode=0
Feb 23 09:13:13 crc kubenswrapper[4940]: I0223 09:13:13.692830 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" event={"ID":"25fa996a-bfc0-4b64-a588-c45d80c9e0b8","Type":"ContainerDied","Data":"e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960"}
Feb 23 09:13:13 crc kubenswrapper[4940]: I0223 09:13:13.955946 4940 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7453ffea-4a1f-426c-ac9c-3377973fdb19" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: i/o timeout"
Feb 23 09:13:14 crc kubenswrapper[4940]: I0223 09:13:14.706543 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" event={"ID":"25fa996a-bfc0-4b64-a588-c45d80c9e0b8","Type":"ContainerStarted","Data":"03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b"}
Feb 23 09:13:14 crc kubenswrapper[4940]: I0223 09:13:14.742331 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" podStartSLOduration=4.742312119 podStartE2EDuration="4.742312119s" podCreationTimestamp="2026-02-23 09:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:13:14.73599297 +0000 UTC m=+1526.119199157" watchObservedRunningTime="2026-02-23 09:13:14.742312119 +0000 UTC m=+1526.125518276"
Feb 23 09:13:15 crc kubenswrapper[4940]: I0223 09:13:15.716163 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:16 crc kubenswrapper[4940]: I0223 09:13:16.320831 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mm4hq"
Feb 23 09:13:16 crc kubenswrapper[4940]: I0223 09:13:16.371042 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mm4hq"
Feb 23 09:13:17 crc kubenswrapper[4940]: I0223 09:13:17.116451 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm4hq"]
Feb 23 09:13:17 crc kubenswrapper[4940]: I0223 09:13:17.734896 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mm4hq" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="registry-server" containerID="cri-o://2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808" gracePeriod=2
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.285363 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4hq"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.398531 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-catalog-content\") pod \"141e9d14-de58-492b-aa30-9ecf534ce324\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") "
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.398603 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-utilities\") pod \"141e9d14-de58-492b-aa30-9ecf534ce324\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") "
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.398758 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dkhx\" (UniqueName: \"kubernetes.io/projected/141e9d14-de58-492b-aa30-9ecf534ce324-kube-api-access-4dkhx\") pod \"141e9d14-de58-492b-aa30-9ecf534ce324\" (UID: \"141e9d14-de58-492b-aa30-9ecf534ce324\") "
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.399850 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-utilities" (OuterVolumeSpecName: "utilities") pod "141e9d14-de58-492b-aa30-9ecf534ce324" (UID: "141e9d14-de58-492b-aa30-9ecf534ce324"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.404197 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141e9d14-de58-492b-aa30-9ecf534ce324-kube-api-access-4dkhx" (OuterVolumeSpecName: "kube-api-access-4dkhx") pod "141e9d14-de58-492b-aa30-9ecf534ce324" (UID: "141e9d14-de58-492b-aa30-9ecf534ce324"). InnerVolumeSpecName "kube-api-access-4dkhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.501215 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.501254 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dkhx\" (UniqueName: \"kubernetes.io/projected/141e9d14-de58-492b-aa30-9ecf534ce324-kube-api-access-4dkhx\") on node \"crc\" DevicePath \"\""
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.594514 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "141e9d14-de58-492b-aa30-9ecf534ce324" (UID: "141e9d14-de58-492b-aa30-9ecf534ce324"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.603589 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/141e9d14-de58-492b-aa30-9ecf534ce324-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.752694 4940 generic.go:334] "Generic (PLEG): container finished" podID="141e9d14-de58-492b-aa30-9ecf534ce324" containerID="2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808" exitCode=0
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.752744 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerDied","Data":"2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808"}
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.752782 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm4hq" event={"ID":"141e9d14-de58-492b-aa30-9ecf534ce324","Type":"ContainerDied","Data":"01b1779120d7c5985f0fb219de86f8d94cbc1fc2d739b87eb8e89717b0c97f66"}
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.752805 4940 scope.go:117] "RemoveContainer" containerID="2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.752846 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm4hq"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.788673 4940 scope.go:117] "RemoveContainer" containerID="0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.814772 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm4hq"]
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.828186 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mm4hq"]
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.844147 4940 scope.go:117] "RemoveContainer" containerID="ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.892945 4940 scope.go:117] "RemoveContainer" containerID="2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808"
Feb 23 09:13:18 crc kubenswrapper[4940]: E0223 09:13:18.893436 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808\": container with ID starting with 2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808 not found: ID does not exist" containerID="2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.893480 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808"} err="failed to get container status \"2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808\": rpc error: code = NotFound desc = could not find container \"2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808\": container with ID starting with 2963f44c5696b26e5cf7330c25ae508c58d15b78e2f3e11cb40e961340383808 not found: ID does not exist"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.893505 4940 scope.go:117] "RemoveContainer" containerID="0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78"
Feb 23 09:13:18 crc kubenswrapper[4940]: E0223 09:13:18.893941 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78\": container with ID starting with 0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78 not found: ID does not exist" containerID="0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.893988 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78"} err="failed to get container status \"0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78\": rpc error: code = NotFound desc = could not find container \"0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78\": container with ID starting with 0a8b84b7c88640f6f6a0c74ccfeab9d98635845488b1ae734da7c1ec32997b78 not found: ID does not exist"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.894021 4940 scope.go:117] "RemoveContainer" containerID="ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6"
Feb 23 09:13:18 crc kubenswrapper[4940]: E0223 09:13:18.894577 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6\": container with ID starting with ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6 not found: ID does not exist" containerID="ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6"
Feb 23 09:13:18 crc kubenswrapper[4940]: I0223 09:13:18.894824 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6"} err="failed to get container status \"ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6\": rpc error: code = NotFound desc = could not find container \"ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6\": container with ID starting with ee8276a9cbf2dbad40fc00ef5039c3c3f6a8b73ead6d330746d703cb68e4d4d6 not found: ID does not exist"
Feb 23 09:13:19 crc kubenswrapper[4940]: I0223 09:13:19.361389 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" path="/var/lib/kubelet/pods/141e9d14-de58-492b-aa30-9ecf534ce324/volumes"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.532835 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.605374 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-hgkbr"]
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.605649 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" containerName="dnsmasq-dns" containerID="cri-o://e444a861a71b87a1edc1fc26769e634a7e4ca0943635b8c7c4077988298a1982" gracePeriod=10
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.778541 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d99fc9df9-v8kxm"]
Feb 23 09:13:21 crc kubenswrapper[4940]: E0223 09:13:21.779520 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="registry-server"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.779662 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="registry-server"
Feb 23 09:13:21 crc kubenswrapper[4940]: E0223 09:13:21.779778 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="extract-content"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.779895 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="extract-content"
Feb 23 09:13:21 crc kubenswrapper[4940]: E0223 09:13:21.780000 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="extract-utilities"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.780087 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="extract-utilities"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.780457 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="141e9d14-de58-492b-aa30-9ecf534ce324" containerName="registry-server"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.782261 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.806266 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d99fc9df9-v8kxm"]
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.828993 4940 generic.go:334] "Generic (PLEG): container finished" podID="c5af1432-d260-46c1-9502-de04b6978ca4" containerID="e444a861a71b87a1edc1fc26769e634a7e4ca0943635b8c7c4077988298a1982" exitCode=0
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.829048 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" event={"ID":"c5af1432-d260-46c1-9502-de04b6978ca4","Type":"ContainerDied","Data":"e444a861a71b87a1edc1fc26769e634a7e4ca0943635b8c7c4077988298a1982"}
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.879737 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjnm\" (UniqueName: \"kubernetes.io/projected/99e36de7-b768-429a-a0c5-78ee546952bf-kube-api-access-hqjnm\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.879836 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-dns-svc\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.879905 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.879982 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-config\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.880017 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.880064 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.880127 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985090 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjnm\" (UniqueName: \"kubernetes.io/projected/99e36de7-b768-429a-a0c5-78ee546952bf-kube-api-access-hqjnm\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985527 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-dns-svc\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985594 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985721 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-config\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985747 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985841 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.985922 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.986973 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.987139 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-config\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.987576 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.988133 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-dns-svc\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm"
Feb 23 09:13:21 crc kubenswrapper[4940]: I0223
09:13:21.988874 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" Feb 23 09:13:21 crc kubenswrapper[4940]: I0223 09:13:21.995500 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99e36de7-b768-429a-a0c5-78ee546952bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.051861 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjnm\" (UniqueName: \"kubernetes.io/projected/99e36de7-b768-429a-a0c5-78ee546952bf-kube-api-access-hqjnm\") pod \"dnsmasq-dns-5d99fc9df9-v8kxm\" (UID: \"99e36de7-b768-429a-a0c5-78ee546952bf\") " pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.109399 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.276204 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.403228 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-svc\") pod \"c5af1432-d260-46c1-9502-de04b6978ca4\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.403582 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7bdx\" (UniqueName: \"kubernetes.io/projected/c5af1432-d260-46c1-9502-de04b6978ca4-kube-api-access-n7bdx\") pod \"c5af1432-d260-46c1-9502-de04b6978ca4\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.403776 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-swift-storage-0\") pod \"c5af1432-d260-46c1-9502-de04b6978ca4\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.403821 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-nb\") pod \"c5af1432-d260-46c1-9502-de04b6978ca4\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.403855 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-sb\") pod \"c5af1432-d260-46c1-9502-de04b6978ca4\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.403873 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-config\") pod \"c5af1432-d260-46c1-9502-de04b6978ca4\" (UID: \"c5af1432-d260-46c1-9502-de04b6978ca4\") " Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.408751 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5af1432-d260-46c1-9502-de04b6978ca4-kube-api-access-n7bdx" (OuterVolumeSpecName: "kube-api-access-n7bdx") pod "c5af1432-d260-46c1-9502-de04b6978ca4" (UID: "c5af1432-d260-46c1-9502-de04b6978ca4"). InnerVolumeSpecName "kube-api-access-n7bdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.463303 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5af1432-d260-46c1-9502-de04b6978ca4" (UID: "c5af1432-d260-46c1-9502-de04b6978ca4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.470575 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5af1432-d260-46c1-9502-de04b6978ca4" (UID: "c5af1432-d260-46c1-9502-de04b6978ca4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.471888 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-config" (OuterVolumeSpecName: "config") pod "c5af1432-d260-46c1-9502-de04b6978ca4" (UID: "c5af1432-d260-46c1-9502-de04b6978ca4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.478622 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5af1432-d260-46c1-9502-de04b6978ca4" (UID: "c5af1432-d260-46c1-9502-de04b6978ca4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.483660 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5af1432-d260-46c1-9502-de04b6978ca4" (UID: "c5af1432-d260-46c1-9502-de04b6978ca4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.506822 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.506855 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.507631 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.507649 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-config\") on node 
\"crc\" DevicePath \"\"" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.507659 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5af1432-d260-46c1-9502-de04b6978ca4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.507668 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7bdx\" (UniqueName: \"kubernetes.io/projected/c5af1432-d260-46c1-9502-de04b6978ca4-kube-api-access-n7bdx\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.642120 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d99fc9df9-v8kxm"] Feb 23 09:13:22 crc kubenswrapper[4940]: W0223 09:13:22.648501 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99e36de7_b768_429a_a0c5_78ee546952bf.slice/crio-c87bf03178f12f2302ca19d92f7c4800eabe14eccc0fd9c2315ff429d51147a1 WatchSource:0}: Error finding container c87bf03178f12f2302ca19d92f7c4800eabe14eccc0fd9c2315ff429d51147a1: Status 404 returned error can't find the container with id c87bf03178f12f2302ca19d92f7c4800eabe14eccc0fd9c2315ff429d51147a1 Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.839381 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" event={"ID":"99e36de7-b768-429a-a0c5-78ee546952bf","Type":"ContainerStarted","Data":"c87bf03178f12f2302ca19d92f7c4800eabe14eccc0fd9c2315ff429d51147a1"} Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.841972 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" event={"ID":"c5af1432-d260-46c1-9502-de04b6978ca4","Type":"ContainerDied","Data":"986971ce13f9979f4215c500234f068424d10dbd8a4372b6b8f7ec169f8786ec"} Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.842026 4940 scope.go:117] "RemoveContainer" 
containerID="e444a861a71b87a1edc1fc26769e634a7e4ca0943635b8c7c4077988298a1982" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.842170 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b4c997d87-hgkbr" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.886637 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-hgkbr"] Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.890585 4940 scope.go:117] "RemoveContainer" containerID="8551a511799cde32c8cb464ced3e5564dbf55fa5fb7dbe69e40d54cfd114350c" Feb 23 09:13:22 crc kubenswrapper[4940]: I0223 09:13:22.896282 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b4c997d87-hgkbr"] Feb 23 09:13:23 crc kubenswrapper[4940]: I0223 09:13:23.358048 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" path="/var/lib/kubelet/pods/c5af1432-d260-46c1-9502-de04b6978ca4/volumes" Feb 23 09:13:23 crc kubenswrapper[4940]: I0223 09:13:23.859295 4940 generic.go:334] "Generic (PLEG): container finished" podID="99e36de7-b768-429a-a0c5-78ee546952bf" containerID="6b1337e8d1fab5aa8798947915ff3117d5b3f3173ea76c08053eec44da035cb0" exitCode=0 Feb 23 09:13:23 crc kubenswrapper[4940]: I0223 09:13:23.859399 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" event={"ID":"99e36de7-b768-429a-a0c5-78ee546952bf","Type":"ContainerDied","Data":"6b1337e8d1fab5aa8798947915ff3117d5b3f3173ea76c08053eec44da035cb0"} Feb 23 09:13:24 crc kubenswrapper[4940]: I0223 09:13:24.876197 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" event={"ID":"99e36de7-b768-429a-a0c5-78ee546952bf","Type":"ContainerStarted","Data":"65096ea1a50d86dd3c290e5b6441732e7b6daa1b2923f75691f1b4b3cf37ccaa"} Feb 23 09:13:24 crc kubenswrapper[4940]: I0223 09:13:24.876855 4940 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" Feb 23 09:13:24 crc kubenswrapper[4940]: I0223 09:13:24.905938 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" podStartSLOduration=3.905899127 podStartE2EDuration="3.905899127s" podCreationTimestamp="2026-02-23 09:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:13:24.900113175 +0000 UTC m=+1536.283319342" watchObservedRunningTime="2026-02-23 09:13:24.905899127 +0000 UTC m=+1536.289105294" Feb 23 09:13:31 crc kubenswrapper[4940]: I0223 09:13:31.429259 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:13:31 crc kubenswrapper[4940]: I0223 09:13:31.430005 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.112577 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d99fc9df9-v8kxm" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.178351 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-lphr5"] Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.178629 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" 
containerName="dnsmasq-dns" containerID="cri-o://03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b" gracePeriod=10 Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.710169 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864045 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-swift-storage-0\") pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864222 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-config\") pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864264 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-nb\") pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864303 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-openstack-edpm-ipam\") pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864419 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-sb\") 
pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864491 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-svc\") pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.864550 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jhwz\" (UniqueName: \"kubernetes.io/projected/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-kube-api-access-7jhwz\") pod \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\" (UID: \"25fa996a-bfc0-4b64-a588-c45d80c9e0b8\") " Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.875259 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-kube-api-access-7jhwz" (OuterVolumeSpecName: "kube-api-access-7jhwz") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "kube-api-access-7jhwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.930385 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.943418 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.944321 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.944354 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.949071 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-config" (OuterVolumeSpecName: "config") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.950974 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25fa996a-bfc0-4b64-a588-c45d80c9e0b8" (UID: "25fa996a-bfc0-4b64-a588-c45d80c9e0b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972650 4940 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972680 4940 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-config\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972696 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972708 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972719 4940 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972753 4940 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972767 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jhwz\" (UniqueName: \"kubernetes.io/projected/25fa996a-bfc0-4b64-a588-c45d80c9e0b8-kube-api-access-7jhwz\") on node \"crc\" DevicePath \"\"" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.972970 4940 generic.go:334] "Generic (PLEG): container finished" podID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerID="03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b" exitCode=0 Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.973027 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" event={"ID":"25fa996a-bfc0-4b64-a588-c45d80c9e0b8","Type":"ContainerDied","Data":"03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b"} Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.973067 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" event={"ID":"25fa996a-bfc0-4b64-a588-c45d80c9e0b8","Type":"ContainerDied","Data":"bf3fbdec2c73678ee96684a35d8c06fd17fcaccaee95bb06c75c5b89938fc4ed"} Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.973093 4940 scope.go:117] "RemoveContainer" containerID="03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b" Feb 23 09:13:32 crc kubenswrapper[4940]: I0223 09:13:32.973068 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5559d4f67f-lphr5" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.019583 4940 scope.go:117] "RemoveContainer" containerID="e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.034719 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-lphr5"] Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.053635 4940 scope.go:117] "RemoveContainer" containerID="03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b" Feb 23 09:13:33 crc kubenswrapper[4940]: E0223 09:13:33.054171 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b\": container with ID starting with 03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b not found: ID does not exist" containerID="03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.054218 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b"} err="failed to get container status \"03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b\": rpc error: code = NotFound desc = could not find container \"03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b\": container with ID starting with 03c8ff89f1404812c85783e37aed7af503f90cb2759eee1ff1e2910e1e51095b not found: ID does not exist" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.054249 4940 scope.go:117] "RemoveContainer" containerID="e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.054700 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5559d4f67f-lphr5"] Feb 23 
09:13:33 crc kubenswrapper[4940]: E0223 09:13:33.054753 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960\": container with ID starting with e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960 not found: ID does not exist" containerID="e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.055004 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960"} err="failed to get container status \"e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960\": rpc error: code = NotFound desc = could not find container \"e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960\": container with ID starting with e0ab0b61e89f151465d6d8c44bab3115e0e93b8c172b96e0f7538f1ed34e4960 not found: ID does not exist" Feb 23 09:13:33 crc kubenswrapper[4940]: I0223 09:13:33.359474 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" path="/var/lib/kubelet/pods/25fa996a-bfc0-4b64-a588-c45d80c9e0b8/volumes" Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.897001 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"] Feb 23 09:13:44 crc kubenswrapper[4940]: E0223 09:13:44.897947 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerName="dnsmasq-dns" Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.897963 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerName="dnsmasq-dns" Feb 23 09:13:44 crc kubenswrapper[4940]: E0223 09:13:44.897983 4940 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerName="init"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.897992 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerName="init"
Feb 23 09:13:44 crc kubenswrapper[4940]: E0223 09:13:44.898008 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" containerName="init"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.898014 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" containerName="init"
Feb 23 09:13:44 crc kubenswrapper[4940]: E0223 09:13:44.898043 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" containerName="dnsmasq-dns"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.898050 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" containerName="dnsmasq-dns"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.898252 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fa996a-bfc0-4b64-a588-c45d80c9e0b8" containerName="dnsmasq-dns"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.898283 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5af1432-d260-46c1-9502-de04b6978ca4" containerName="dnsmasq-dns"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.899211 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.902026 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.903178 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.903212 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.903773 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 09:13:44 crc kubenswrapper[4940]: I0223 09:13:44.914216 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"]
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.067951 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.067996 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb9g6\" (UniqueName: \"kubernetes.io/projected/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-kube-api-access-hb9g6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.068080 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.068215 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.096640 4940 generic.go:334] "Generic (PLEG): container finished" podID="438743de-ddf8-4b10-878a-b87c389cd3b6" containerID="6a1f86722dfe96633622e2b6662c60cd9c8f167ffc1d4e04f4dc76c1bff7da66" exitCode=0
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.096687 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438743de-ddf8-4b10-878a-b87c389cd3b6","Type":"ContainerDied","Data":"6a1f86722dfe96633622e2b6662c60cd9c8f167ffc1d4e04f4dc76c1bff7da66"}
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.170341 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.170431 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.170458 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb9g6\" (UniqueName: \"kubernetes.io/projected/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-kube-api-access-hb9g6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.170568 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.176278 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.179378 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.187086 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.193698 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb9g6\" (UniqueName: \"kubernetes.io/projected/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-kube-api-access-hb9g6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.220228 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:13:45 crc kubenswrapper[4940]: I0223 09:13:45.781421 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"]
Feb 23 09:13:46 crc kubenswrapper[4940]: I0223 09:13:46.108654 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l" event={"ID":"cb29483b-9f50-4202-935a-0ff2e3e7d3ec","Type":"ContainerStarted","Data":"6107841a5f32c3a6ea8ef1ec7dd79c75ac846b98a4bb7943475dff951e2fe088"}
Feb 23 09:13:46 crc kubenswrapper[4940]: I0223 09:13:46.113757 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"438743de-ddf8-4b10-878a-b87c389cd3b6","Type":"ContainerStarted","Data":"7cc9a0486dc2501765d0260ccec25b5834f57d0797924c68297d85721016d719"}
Feb 23 09:13:46 crc kubenswrapper[4940]: I0223 09:13:46.114888 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 23 09:13:46 crc kubenswrapper[4940]: I0223 09:13:46.117356 4940 generic.go:334] "Generic (PLEG): container finished" podID="89409108-7455-4318-83ba-65a6dd96d76c" containerID="bb131a2b964b3b473a6d1a8f721051c3d61ebc0803e88011037b91b236d31517" exitCode=0
Feb 23 09:13:46 crc kubenswrapper[4940]: I0223 09:13:46.117411 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89409108-7455-4318-83ba-65a6dd96d76c","Type":"ContainerDied","Data":"bb131a2b964b3b473a6d1a8f721051c3d61ebc0803e88011037b91b236d31517"}
Feb 23 09:13:46 crc kubenswrapper[4940]: I0223 09:13:46.144492 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.144465451 podStartE2EDuration="38.144465451s" podCreationTimestamp="2026-02-23 09:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:13:46.139353762 +0000 UTC m=+1557.522559919" watchObservedRunningTime="2026-02-23 09:13:46.144465451 +0000 UTC m=+1557.527671598"
Feb 23 09:13:47 crc kubenswrapper[4940]: I0223 09:13:47.131020 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"89409108-7455-4318-83ba-65a6dd96d76c","Type":"ContainerStarted","Data":"79b5dab81e290366628a4e2a26e49dfe17e195a59abd5bfa15e93ce634bc2182"}
Feb 23 09:13:47 crc kubenswrapper[4940]: I0223 09:13:47.131549 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:13:47 crc kubenswrapper[4940]: I0223 09:13:47.162366 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.162344067 podStartE2EDuration="38.162344067s" podCreationTimestamp="2026-02-23 09:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 09:13:47.152731538 +0000 UTC m=+1558.535937715" watchObservedRunningTime="2026-02-23 09:13:47.162344067 +0000 UTC m=+1558.545550224"
Feb 23 09:13:56 crc kubenswrapper[4940]: I0223 09:13:56.910280 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:13:57 crc kubenswrapper[4940]: I0223 09:13:57.247219 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l" event={"ID":"cb29483b-9f50-4202-935a-0ff2e3e7d3ec","Type":"ContainerStarted","Data":"b986909cd58e10139b28ac0c2d785c1cd877a867099e02433d492553d9512520"}
Feb 23 09:13:57 crc kubenswrapper[4940]: I0223 09:13:57.285321 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l" podStartSLOduration=2.166886749 podStartE2EDuration="13.285289463s" podCreationTimestamp="2026-02-23 09:13:44 +0000 UTC" firstStartedPulling="2026-02-23 09:13:45.789364637 +0000 UTC m=+1557.172570794" lastFinishedPulling="2026-02-23 09:13:56.907767351 +0000 UTC m=+1568.290973508" observedRunningTime="2026-02-23 09:13:57.270151792 +0000 UTC m=+1568.653357949" watchObservedRunningTime="2026-02-23 09:13:57.285289463 +0000 UTC m=+1568.668495660"
Feb 23 09:13:59 crc kubenswrapper[4940]: I0223 09:13:59.359365 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 23 09:14:00 crc kubenswrapper[4940]: I0223 09:14:00.400813 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 09:14:01 crc kubenswrapper[4940]: I0223 09:14:01.429593 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 09:14:01 crc kubenswrapper[4940]: I0223 09:14:01.429870 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 09:14:01 crc kubenswrapper[4940]: I0223 09:14:01.429920 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs"
Feb 23 09:14:01 crc kubenswrapper[4940]: I0223 09:14:01.430739 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 09:14:01 crc kubenswrapper[4940]: I0223 09:14:01.430810 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" gracePeriod=600
Feb 23 09:14:01 crc kubenswrapper[4940]: E0223 09:14:01.552925 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:14:02 crc kubenswrapper[4940]: I0223 09:14:02.290694 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" exitCode=0
Feb 23 09:14:02 crc kubenswrapper[4940]: I0223 09:14:02.290809 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"}
Feb 23 09:14:02 crc kubenswrapper[4940]: I0223 09:14:02.291059 4940 scope.go:117] "RemoveContainer" containerID="4fad30523a6437e41dff0a057c489e191d38b824c4f57fb0c206fe34b2b2c2ec"
Feb 23 09:14:02 crc kubenswrapper[4940]: I0223 09:14:02.291844 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:14:02 crc kubenswrapper[4940]: E0223 09:14:02.292223 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:14:08 crc kubenswrapper[4940]: I0223 09:14:08.371973 4940 generic.go:334] "Generic (PLEG): container finished" podID="cb29483b-9f50-4202-935a-0ff2e3e7d3ec" containerID="b986909cd58e10139b28ac0c2d785c1cd877a867099e02433d492553d9512520" exitCode=0
Feb 23 09:14:08 crc kubenswrapper[4940]: I0223 09:14:08.372074 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l" event={"ID":"cb29483b-9f50-4202-935a-0ff2e3e7d3ec","Type":"ContainerDied","Data":"b986909cd58e10139b28ac0c2d785c1cd877a867099e02433d492553d9512520"}
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.880994 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.991067 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb9g6\" (UniqueName: \"kubernetes.io/projected/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-kube-api-access-hb9g6\") pod \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") "
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.991131 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-ssh-key-openstack-edpm-ipam\") pod \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") "
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.991178 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-inventory\") pod \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") "
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.991236 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-repo-setup-combined-ca-bundle\") pod \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\" (UID: \"cb29483b-9f50-4202-935a-0ff2e3e7d3ec\") "
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.996603 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-kube-api-access-hb9g6" (OuterVolumeSpecName: "kube-api-access-hb9g6") pod "cb29483b-9f50-4202-935a-0ff2e3e7d3ec" (UID: "cb29483b-9f50-4202-935a-0ff2e3e7d3ec"). InnerVolumeSpecName "kube-api-access-hb9g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:14:09 crc kubenswrapper[4940]: I0223 09:14:09.996785 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "cb29483b-9f50-4202-935a-0ff2e3e7d3ec" (UID: "cb29483b-9f50-4202-935a-0ff2e3e7d3ec"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.018854 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-inventory" (OuterVolumeSpecName: "inventory") pod "cb29483b-9f50-4202-935a-0ff2e3e7d3ec" (UID: "cb29483b-9f50-4202-935a-0ff2e3e7d3ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.023712 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cb29483b-9f50-4202-935a-0ff2e3e7d3ec" (UID: "cb29483b-9f50-4202-935a-0ff2e3e7d3ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.094567 4940 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.094633 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb9g6\" (UniqueName: \"kubernetes.io/projected/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-kube-api-access-hb9g6\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.094647 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.094660 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb29483b-9f50-4202-935a-0ff2e3e7d3ec-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.392373 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l" event={"ID":"cb29483b-9f50-4202-935a-0ff2e3e7d3ec","Type":"ContainerDied","Data":"6107841a5f32c3a6ea8ef1ec7dd79c75ac846b98a4bb7943475dff951e2fe088"}
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.392661 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6107841a5f32c3a6ea8ef1ec7dd79c75ac846b98a4bb7943475dff951e2fe088"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.392600 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.485397 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"]
Feb 23 09:14:10 crc kubenswrapper[4940]: E0223 09:14:10.486628 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb29483b-9f50-4202-935a-0ff2e3e7d3ec" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.486682 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb29483b-9f50-4202-935a-0ff2e3e7d3ec" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.486963 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb29483b-9f50-4202-935a-0ff2e3e7d3ec" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.488073 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.490377 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.490580 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.494487 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.494546 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.520406 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"]
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.607485 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.608002 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvbp\" (UniqueName: \"kubernetes.io/projected/5fe83bad-242b-4933-9ff1-525359d29867-kube-api-access-dlvbp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.608108 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.709842 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.709997 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlvbp\" (UniqueName: \"kubernetes.io/projected/5fe83bad-242b-4933-9ff1-525359d29867-kube-api-access-dlvbp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.710037 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.722445 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.722825 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.730116 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlvbp\" (UniqueName: \"kubernetes.io/projected/5fe83bad-242b-4933-9ff1-525359d29867-kube-api-access-dlvbp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mh545\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:10 crc kubenswrapper[4940]: I0223 09:14:10.822660 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:11 crc kubenswrapper[4940]: I0223 09:14:11.343848 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"]
Feb 23 09:14:11 crc kubenswrapper[4940]: I0223 09:14:11.403564 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545" event={"ID":"5fe83bad-242b-4933-9ff1-525359d29867","Type":"ContainerStarted","Data":"58b943fee70b6ffaca0efc1d02a6ede88bde37e50d55dcf96f267411d5cc1455"}
Feb 23 09:14:12 crc kubenswrapper[4940]: I0223 09:14:12.415746 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545" event={"ID":"5fe83bad-242b-4933-9ff1-525359d29867","Type":"ContainerStarted","Data":"05e602e8802fa52c49675507cc1790f79dc774a96c719e762d28ce89c589b3c7"}
Feb 23 09:14:12 crc kubenswrapper[4940]: I0223 09:14:12.436781 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545" podStartSLOduration=2.048879381 podStartE2EDuration="2.436760575s" podCreationTimestamp="2026-02-23 09:14:10 +0000 UTC" firstStartedPulling="2026-02-23 09:14:11.348581963 +0000 UTC m=+1582.731788120" lastFinishedPulling="2026-02-23 09:14:11.736463157 +0000 UTC m=+1583.119669314" observedRunningTime="2026-02-23 09:14:12.431313026 +0000 UTC m=+1583.814519183" watchObservedRunningTime="2026-02-23 09:14:12.436760575 +0000 UTC m=+1583.819966732"
Feb 23 09:14:14 crc kubenswrapper[4940]: I0223 09:14:14.435577 4940 generic.go:334] "Generic (PLEG): container finished" podID="5fe83bad-242b-4933-9ff1-525359d29867" containerID="05e602e8802fa52c49675507cc1790f79dc774a96c719e762d28ce89c589b3c7" exitCode=0
Feb 23 09:14:14 crc kubenswrapper[4940]: I0223 09:14:14.435657 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545" event={"ID":"5fe83bad-242b-4933-9ff1-525359d29867","Type":"ContainerDied","Data":"05e602e8802fa52c49675507cc1790f79dc774a96c719e762d28ce89c589b3c7"}
Feb 23 09:14:15 crc kubenswrapper[4940]: I0223 09:14:15.861993 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.020188 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlvbp\" (UniqueName: \"kubernetes.io/projected/5fe83bad-242b-4933-9ff1-525359d29867-kube-api-access-dlvbp\") pod \"5fe83bad-242b-4933-9ff1-525359d29867\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") "
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.020351 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-ssh-key-openstack-edpm-ipam\") pod \"5fe83bad-242b-4933-9ff1-525359d29867\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") "
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.020460 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-inventory\") pod \"5fe83bad-242b-4933-9ff1-525359d29867\" (UID: \"5fe83bad-242b-4933-9ff1-525359d29867\") "
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.025642 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe83bad-242b-4933-9ff1-525359d29867-kube-api-access-dlvbp" (OuterVolumeSpecName: "kube-api-access-dlvbp") pod "5fe83bad-242b-4933-9ff1-525359d29867" (UID: "5fe83bad-242b-4933-9ff1-525359d29867"). InnerVolumeSpecName "kube-api-access-dlvbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.050035 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5fe83bad-242b-4933-9ff1-525359d29867" (UID: "5fe83bad-242b-4933-9ff1-525359d29867"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.077204 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-inventory" (OuterVolumeSpecName: "inventory") pod "5fe83bad-242b-4933-9ff1-525359d29867" (UID: "5fe83bad-242b-4933-9ff1-525359d29867"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.122892 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlvbp\" (UniqueName: \"kubernetes.io/projected/5fe83bad-242b-4933-9ff1-525359d29867-kube-api-access-dlvbp\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.122941 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.122955 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fe83bad-242b-4933-9ff1-525359d29867-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.454930 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545" event={"ID":"5fe83bad-242b-4933-9ff1-525359d29867","Type":"ContainerDied","Data":"58b943fee70b6ffaca0efc1d02a6ede88bde37e50d55dcf96f267411d5cc1455"}
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.454991 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b943fee70b6ffaca0efc1d02a6ede88bde37e50d55dcf96f267411d5cc1455"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.455105 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mh545"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.559511 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"]
Feb 23 09:14:16 crc kubenswrapper[4940]: E0223 09:14:16.560248 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe83bad-242b-4933-9ff1-525359d29867" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.560264 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe83bad-242b-4933-9ff1-525359d29867" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.560485 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe83bad-242b-4933-9ff1-525359d29867" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.561432 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.563552 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.564107 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.564796 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.570212 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.582204 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"]
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.738049 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs74d\" (UniqueName: \"kubernetes.io/projected/5d90dbb8-e870-41e1-bbab-a053b479fee1-kube-api-access-xs74d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.738271 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.738478 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.738563 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.841034 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs74d\" (UniqueName: \"kubernetes.io/projected/5d90dbb8-e870-41e1-bbab-a053b479fee1-kube-api-access-xs74d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.841143 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.841227 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.841272 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.846240 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.847807 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.851113 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.863127 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs74d\" (UniqueName: \"kubernetes.io/projected/5d90dbb8-e870-41e1-bbab-a053b479fee1-kube-api-access-xs74d\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:16 crc kubenswrapper[4940]: I0223 09:14:16.880068 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" Feb 23 09:14:17 crc kubenswrapper[4940]: I0223 09:14:17.346301 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:14:17 crc kubenswrapper[4940]: E0223 09:14:17.347142 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:14:17 crc kubenswrapper[4940]: I0223 09:14:17.489945 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"] Feb 23 09:14:18 crc kubenswrapper[4940]: I0223 09:14:18.478673 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" event={"ID":"5d90dbb8-e870-41e1-bbab-a053b479fee1","Type":"ContainerStarted","Data":"8f4df081347b94b5752691b9e12633689a467637854c0c25fe55527f2c3effab"} Feb 23 09:14:18 crc kubenswrapper[4940]: I0223 09:14:18.478971 4940 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" event={"ID":"5d90dbb8-e870-41e1-bbab-a053b479fee1","Type":"ContainerStarted","Data":"f129be20f020f7057308f39c38070f0aa2ca4ff07508d4fe655ceee1a0307c7c"} Feb 23 09:14:18 crc kubenswrapper[4940]: I0223 09:14:18.514718 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" podStartSLOduration=2.110256871 podStartE2EDuration="2.51469632s" podCreationTimestamp="2026-02-23 09:14:16 +0000 UTC" firstStartedPulling="2026-02-23 09:14:17.490430445 +0000 UTC m=+1588.873636612" lastFinishedPulling="2026-02-23 09:14:17.894869904 +0000 UTC m=+1589.278076061" observedRunningTime="2026-02-23 09:14:18.498053162 +0000 UTC m=+1589.881259329" watchObservedRunningTime="2026-02-23 09:14:18.51469632 +0000 UTC m=+1589.897902497" Feb 23 09:14:31 crc kubenswrapper[4940]: I0223 09:14:31.345259 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:14:31 crc kubenswrapper[4940]: E0223 09:14:31.346126 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:14:32 crc kubenswrapper[4940]: I0223 09:14:32.490153 4940 scope.go:117] "RemoveContainer" containerID="d1e7c42006c191648eec14ed5655d867d922aff230310c3c25cd193649eb7d9c" Feb 23 09:14:32 crc kubenswrapper[4940]: I0223 09:14:32.514665 4940 scope.go:117] "RemoveContainer" containerID="572ce6d167ac51d0ad91a991b210bb256b4159032f69211871f7840ee7d58b59" Feb 23 09:14:32 crc kubenswrapper[4940]: I0223 09:14:32.572263 4940 
scope.go:117] "RemoveContainer" containerID="b9bfdd82c352d157813b0fa560b8738de873ad062b4b47db9c19da0f078c62a4" Feb 23 09:14:45 crc kubenswrapper[4940]: I0223 09:14:45.346517 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:14:45 crc kubenswrapper[4940]: E0223 09:14:45.347544 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:14:56 crc kubenswrapper[4940]: I0223 09:14:56.347393 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:14:56 crc kubenswrapper[4940]: E0223 09:14:56.348567 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.448696 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f64b8"] Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.452851 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.461783 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f64b8"] Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.463976 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-utilities\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.464041 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-catalog-content\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.464149 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntg4\" (UniqueName: \"kubernetes.io/projected/ea835bf0-3cf6-48f8-a9da-92528bd27030-kube-api-access-fntg4\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.566406 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-utilities\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.566476 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-catalog-content\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.566594 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fntg4\" (UniqueName: \"kubernetes.io/projected/ea835bf0-3cf6-48f8-a9da-92528bd27030-kube-api-access-fntg4\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.567218 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-catalog-content\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.567196 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-utilities\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.587834 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fntg4\" (UniqueName: \"kubernetes.io/projected/ea835bf0-3cf6-48f8-a9da-92528bd27030-kube-api-access-fntg4\") pod \"certified-operators-f64b8\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:58 crc kubenswrapper[4940]: I0223 09:14:58.818173 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:14:59 crc kubenswrapper[4940]: I0223 09:14:59.339099 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f64b8"] Feb 23 09:14:59 crc kubenswrapper[4940]: I0223 09:14:59.924826 4940 generic.go:334] "Generic (PLEG): container finished" podID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerID="8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68" exitCode=0 Feb 23 09:14:59 crc kubenswrapper[4940]: I0223 09:14:59.924897 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f64b8" event={"ID":"ea835bf0-3cf6-48f8-a9da-92528bd27030","Type":"ContainerDied","Data":"8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68"} Feb 23 09:14:59 crc kubenswrapper[4940]: I0223 09:14:59.925105 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f64b8" event={"ID":"ea835bf0-3cf6-48f8-a9da-92528bd27030","Type":"ContainerStarted","Data":"e548578c28b254442319f3771004f049bff4fb416ecb5e2b81fc9c4ac8920e28"} Feb 23 09:14:59 crc kubenswrapper[4940]: I0223 09:14:59.927894 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.155055 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7"] Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.157263 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.166604 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7"] Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.167104 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.167315 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.254079 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-secret-volume\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.254156 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpvnh\" (UniqueName: \"kubernetes.io/projected/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-kube-api-access-dpvnh\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.254355 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-config-volume\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.356504 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpvnh\" (UniqueName: \"kubernetes.io/projected/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-kube-api-access-dpvnh\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.356574 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-config-volume\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.356867 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-secret-volume\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.357467 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-config-volume\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.363977 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-secret-volume\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.376269 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpvnh\" (UniqueName: \"kubernetes.io/projected/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-kube-api-access-dpvnh\") pod \"collect-profiles-29530635-xqbs7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.488410 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:00 crc kubenswrapper[4940]: I0223 09:15:00.979657 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7"] Feb 23 09:15:01 crc kubenswrapper[4940]: I0223 09:15:01.943971 4940 generic.go:334] "Generic (PLEG): container finished" podID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerID="526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940" exitCode=0 Feb 23 09:15:01 crc kubenswrapper[4940]: I0223 09:15:01.944024 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f64b8" event={"ID":"ea835bf0-3cf6-48f8-a9da-92528bd27030","Type":"ContainerDied","Data":"526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940"} Feb 23 09:15:01 crc kubenswrapper[4940]: I0223 09:15:01.947271 4940 generic.go:334] "Generic (PLEG): container finished" podID="2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" containerID="715738e9152aafffc6accd0ba77eb6c14ee3a7826e604ac54f16ceb11f80f540" exitCode=0 Feb 23 09:15:01 crc kubenswrapper[4940]: I0223 09:15:01.947328 4940 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" event={"ID":"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7","Type":"ContainerDied","Data":"715738e9152aafffc6accd0ba77eb6c14ee3a7826e604ac54f16ceb11f80f540"} Feb 23 09:15:01 crc kubenswrapper[4940]: I0223 09:15:01.947363 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" event={"ID":"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7","Type":"ContainerStarted","Data":"d38bc5d8375971ae5f87a619cb5d93451cc05a572fd57abae8389fcd66cc834d"} Feb 23 09:15:02 crc kubenswrapper[4940]: I0223 09:15:02.958736 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f64b8" event={"ID":"ea835bf0-3cf6-48f8-a9da-92528bd27030","Type":"ContainerStarted","Data":"2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96"} Feb 23 09:15:02 crc kubenswrapper[4940]: I0223 09:15:02.987491 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f64b8" podStartSLOduration=2.238651314 podStartE2EDuration="4.987461786s" podCreationTimestamp="2026-02-23 09:14:58 +0000 UTC" firstStartedPulling="2026-02-23 09:14:59.927271171 +0000 UTC m=+1631.310477328" lastFinishedPulling="2026-02-23 09:15:02.676081643 +0000 UTC m=+1634.059287800" observedRunningTime="2026-02-23 09:15:02.978897279 +0000 UTC m=+1634.362103456" watchObservedRunningTime="2026-02-23 09:15:02.987461786 +0000 UTC m=+1634.370667953" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.375132 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.441878 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-secret-volume\") pod \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.442024 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-config-volume\") pod \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.442084 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpvnh\" (UniqueName: \"kubernetes.io/projected/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-kube-api-access-dpvnh\") pod \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\" (UID: \"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7\") " Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.442728 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" (UID: "2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.448982 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-kube-api-access-dpvnh" (OuterVolumeSpecName: "kube-api-access-dpvnh") pod "2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" (UID: "2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7"). 
InnerVolumeSpecName "kube-api-access-dpvnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.452788 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" (UID: "2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.544871 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.544912 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.544925 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpvnh\" (UniqueName: \"kubernetes.io/projected/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7-kube-api-access-dpvnh\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.969231 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.970781 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7" event={"ID":"2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7","Type":"ContainerDied","Data":"d38bc5d8375971ae5f87a619cb5d93451cc05a572fd57abae8389fcd66cc834d"} Feb 23 09:15:03 crc kubenswrapper[4940]: I0223 09:15:03.970823 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d38bc5d8375971ae5f87a619cb5d93451cc05a572fd57abae8389fcd66cc834d" Feb 23 09:15:08 crc kubenswrapper[4940]: I0223 09:15:08.345634 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:15:08 crc kubenswrapper[4940]: E0223 09:15:08.346305 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:15:08 crc kubenswrapper[4940]: I0223 09:15:08.818376 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:15:08 crc kubenswrapper[4940]: I0223 09:15:08.818440 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:15:08 crc kubenswrapper[4940]: I0223 09:15:08.870193 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:15:09 crc kubenswrapper[4940]: I0223 09:15:09.084645 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:15:09 crc kubenswrapper[4940]: I0223 09:15:09.135731 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f64b8"] Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.029871 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f64b8" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="registry-server" containerID="cri-o://2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96" gracePeriod=2 Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.552343 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.707639 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-utilities\") pod \"ea835bf0-3cf6-48f8-a9da-92528bd27030\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.707729 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-catalog-content\") pod \"ea835bf0-3cf6-48f8-a9da-92528bd27030\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.707843 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fntg4\" (UniqueName: \"kubernetes.io/projected/ea835bf0-3cf6-48f8-a9da-92528bd27030-kube-api-access-fntg4\") pod \"ea835bf0-3cf6-48f8-a9da-92528bd27030\" (UID: \"ea835bf0-3cf6-48f8-a9da-92528bd27030\") " Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.709441 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-utilities" (OuterVolumeSpecName: "utilities") pod "ea835bf0-3cf6-48f8-a9da-92528bd27030" (UID: "ea835bf0-3cf6-48f8-a9da-92528bd27030"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.727118 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea835bf0-3cf6-48f8-a9da-92528bd27030-kube-api-access-fntg4" (OuterVolumeSpecName: "kube-api-access-fntg4") pod "ea835bf0-3cf6-48f8-a9da-92528bd27030" (UID: "ea835bf0-3cf6-48f8-a9da-92528bd27030"). InnerVolumeSpecName "kube-api-access-fntg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.777513 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea835bf0-3cf6-48f8-a9da-92528bd27030" (UID: "ea835bf0-3cf6-48f8-a9da-92528bd27030"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.812473 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.813058 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea835bf0-3cf6-48f8-a9da-92528bd27030-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:11 crc kubenswrapper[4940]: I0223 09:15:11.813171 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fntg4\" (UniqueName: \"kubernetes.io/projected/ea835bf0-3cf6-48f8-a9da-92528bd27030-kube-api-access-fntg4\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.043181 4940 generic.go:334] "Generic (PLEG): container finished" podID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerID="2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96" exitCode=0 Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.043235 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f64b8" event={"ID":"ea835bf0-3cf6-48f8-a9da-92528bd27030","Type":"ContainerDied","Data":"2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96"} Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.043269 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f64b8" event={"ID":"ea835bf0-3cf6-48f8-a9da-92528bd27030","Type":"ContainerDied","Data":"e548578c28b254442319f3771004f049bff4fb416ecb5e2b81fc9c4ac8920e28"} Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.043289 4940 scope.go:117] "RemoveContainer" containerID="2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 
09:15:12.043456 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f64b8" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.086801 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f64b8"] Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.095666 4940 scope.go:117] "RemoveContainer" containerID="526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.098900 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f64b8"] Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.126361 4940 scope.go:117] "RemoveContainer" containerID="8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.168765 4940 scope.go:117] "RemoveContainer" containerID="2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96" Feb 23 09:15:12 crc kubenswrapper[4940]: E0223 09:15:12.169329 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96\": container with ID starting with 2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96 not found: ID does not exist" containerID="2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.169370 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96"} err="failed to get container status \"2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96\": rpc error: code = NotFound desc = could not find container \"2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96\": container with ID starting with 
2b7df7a8f91f041d3183d865439290f8f283cec42b814fd68cb90f5c3d932a96 not found: ID does not exist" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.169415 4940 scope.go:117] "RemoveContainer" containerID="526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940" Feb 23 09:15:12 crc kubenswrapper[4940]: E0223 09:15:12.170145 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940\": container with ID starting with 526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940 not found: ID does not exist" containerID="526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.170174 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940"} err="failed to get container status \"526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940\": rpc error: code = NotFound desc = could not find container \"526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940\": container with ID starting with 526ee5f6a176923ab63fb12c53b15a8fe4d79cf2f05603da588394021951e940 not found: ID does not exist" Feb 23 09:15:12 crc kubenswrapper[4940]: I0223 09:15:12.170192 4940 scope.go:117] "RemoveContainer" containerID="8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68" Feb 23 09:15:12 crc kubenswrapper[4940]: E0223 09:15:12.170379 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68\": container with ID starting with 8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68 not found: ID does not exist" containerID="8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68" Feb 23 09:15:12 crc 
kubenswrapper[4940]: I0223 09:15:12.170398 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68"} err="failed to get container status \"8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68\": rpc error: code = NotFound desc = could not find container \"8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68\": container with ID starting with 8396ce3127ec1d1444cb76f821778d1794a4f92514de2ce9b47f7d0bf39c5a68 not found: ID does not exist" Feb 23 09:15:13 crc kubenswrapper[4940]: I0223 09:15:13.359108 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" path="/var/lib/kubelet/pods/ea835bf0-3cf6-48f8-a9da-92528bd27030/volumes" Feb 23 09:15:23 crc kubenswrapper[4940]: I0223 09:15:23.346605 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:15:23 crc kubenswrapper[4940]: E0223 09:15:23.347529 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:15:32 crc kubenswrapper[4940]: I0223 09:15:32.771459 4940 scope.go:117] "RemoveContainer" containerID="c66255ffa47f345c3635c233bea3468a82a41928b871561c873a41d70a0535a6" Feb 23 09:15:38 crc kubenswrapper[4940]: I0223 09:15:38.347152 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:15:38 crc kubenswrapper[4940]: E0223 09:15:38.347800 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.912250 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zg782"] Feb 23 09:15:41 crc kubenswrapper[4940]: E0223 09:15:41.913101 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="registry-server" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.913119 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="registry-server" Feb 23 09:15:41 crc kubenswrapper[4940]: E0223 09:15:41.913139 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="extract-content" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.913148 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="extract-content" Feb 23 09:15:41 crc kubenswrapper[4940]: E0223 09:15:41.913168 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" containerName="collect-profiles" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.913176 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" containerName="collect-profiles" Feb 23 09:15:41 crc kubenswrapper[4940]: E0223 09:15:41.913218 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="extract-utilities" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.913399 4940 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="extract-utilities" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.913637 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea835bf0-3cf6-48f8-a9da-92528bd27030" containerName="registry-server" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.913664 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" containerName="collect-profiles" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.915050 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:41 crc kubenswrapper[4940]: I0223 09:15:41.941264 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg782"] Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.077863 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-utilities\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.077993 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-catalog-content\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.078290 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzp9\" (UniqueName: 
\"kubernetes.io/projected/1380233a-2c52-4948-a790-7c85da2ac891-kube-api-access-ltzp9\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.179641 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltzp9\" (UniqueName: \"kubernetes.io/projected/1380233a-2c52-4948-a790-7c85da2ac891-kube-api-access-ltzp9\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.182107 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-utilities\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.182165 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-catalog-content\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.183343 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-utilities\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.183343 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-catalog-content\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.216380 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltzp9\" (UniqueName: \"kubernetes.io/projected/1380233a-2c52-4948-a790-7c85da2ac891-kube-api-access-ltzp9\") pod \"redhat-marketplace-zg782\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.250975 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:42 crc kubenswrapper[4940]: I0223 09:15:42.745267 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg782"] Feb 23 09:15:43 crc kubenswrapper[4940]: I0223 09:15:43.394483 4940 generic.go:334] "Generic (PLEG): container finished" podID="1380233a-2c52-4948-a790-7c85da2ac891" containerID="f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c" exitCode=0 Feb 23 09:15:43 crc kubenswrapper[4940]: I0223 09:15:43.394550 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg782" event={"ID":"1380233a-2c52-4948-a790-7c85da2ac891","Type":"ContainerDied","Data":"f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c"} Feb 23 09:15:43 crc kubenswrapper[4940]: I0223 09:15:43.394592 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg782" event={"ID":"1380233a-2c52-4948-a790-7c85da2ac891","Type":"ContainerStarted","Data":"0c6dc1c9da383a1ce77e6cc62e81bc13c135fd28a28a24f0e047d5be83ad1708"} Feb 23 09:15:44 crc kubenswrapper[4940]: I0223 09:15:44.408257 4940 generic.go:334] "Generic (PLEG): container 
finished" podID="1380233a-2c52-4948-a790-7c85da2ac891" containerID="647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9" exitCode=0 Feb 23 09:15:44 crc kubenswrapper[4940]: I0223 09:15:44.408328 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg782" event={"ID":"1380233a-2c52-4948-a790-7c85da2ac891","Type":"ContainerDied","Data":"647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9"} Feb 23 09:15:45 crc kubenswrapper[4940]: I0223 09:15:45.419461 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg782" event={"ID":"1380233a-2c52-4948-a790-7c85da2ac891","Type":"ContainerStarted","Data":"29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426"} Feb 23 09:15:45 crc kubenswrapper[4940]: I0223 09:15:45.437980 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zg782" podStartSLOduration=3.003174451 podStartE2EDuration="4.437956231s" podCreationTimestamp="2026-02-23 09:15:41 +0000 UTC" firstStartedPulling="2026-02-23 09:15:43.397406585 +0000 UTC m=+1674.780612742" lastFinishedPulling="2026-02-23 09:15:44.832188325 +0000 UTC m=+1676.215394522" observedRunningTime="2026-02-23 09:15:45.435183305 +0000 UTC m=+1676.818389482" watchObservedRunningTime="2026-02-23 09:15:45.437956231 +0000 UTC m=+1676.821162388" Feb 23 09:15:49 crc kubenswrapper[4940]: I0223 09:15:49.433872 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:15:49 crc kubenswrapper[4940]: E0223 09:15:49.435377 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:15:52 crc kubenswrapper[4940]: I0223 09:15:52.253895 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:52 crc kubenswrapper[4940]: I0223 09:15:52.254411 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:52 crc kubenswrapper[4940]: I0223 09:15:52.310674 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:52 crc kubenswrapper[4940]: I0223 09:15:52.527683 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:52 crc kubenswrapper[4940]: I0223 09:15:52.584846 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg782"] Feb 23 09:15:54 crc kubenswrapper[4940]: I0223 09:15:54.502306 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zg782" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="registry-server" containerID="cri-o://29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426" gracePeriod=2 Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.005985 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.033996 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-utilities\") pod \"1380233a-2c52-4948-a790-7c85da2ac891\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.035254 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-utilities" (OuterVolumeSpecName: "utilities") pod "1380233a-2c52-4948-a790-7c85da2ac891" (UID: "1380233a-2c52-4948-a790-7c85da2ac891"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.135984 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-catalog-content\") pod \"1380233a-2c52-4948-a790-7c85da2ac891\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.136101 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltzp9\" (UniqueName: \"kubernetes.io/projected/1380233a-2c52-4948-a790-7c85da2ac891-kube-api-access-ltzp9\") pod \"1380233a-2c52-4948-a790-7c85da2ac891\" (UID: \"1380233a-2c52-4948-a790-7c85da2ac891\") " Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.136403 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.147958 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1380233a-2c52-4948-a790-7c85da2ac891-kube-api-access-ltzp9" (OuterVolumeSpecName: "kube-api-access-ltzp9") pod "1380233a-2c52-4948-a790-7c85da2ac891" (UID: "1380233a-2c52-4948-a790-7c85da2ac891"). InnerVolumeSpecName "kube-api-access-ltzp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.161815 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1380233a-2c52-4948-a790-7c85da2ac891" (UID: "1380233a-2c52-4948-a790-7c85da2ac891"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.238767 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltzp9\" (UniqueName: \"kubernetes.io/projected/1380233a-2c52-4948-a790-7c85da2ac891-kube-api-access-ltzp9\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.238816 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1380233a-2c52-4948-a790-7c85da2ac891-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.515263 4940 generic.go:334] "Generic (PLEG): container finished" podID="1380233a-2c52-4948-a790-7c85da2ac891" containerID="29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426" exitCode=0 Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.515353 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zg782" event={"ID":"1380233a-2c52-4948-a790-7c85da2ac891","Type":"ContainerDied","Data":"29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426"} Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.515538 4940 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-zg782" event={"ID":"1380233a-2c52-4948-a790-7c85da2ac891","Type":"ContainerDied","Data":"0c6dc1c9da383a1ce77e6cc62e81bc13c135fd28a28a24f0e047d5be83ad1708"} Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.515560 4940 scope.go:117] "RemoveContainer" containerID="29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.515377 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zg782" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.551747 4940 scope.go:117] "RemoveContainer" containerID="647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.553054 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg782"] Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.568802 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zg782"] Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.609040 4940 scope.go:117] "RemoveContainer" containerID="f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.660026 4940 scope.go:117] "RemoveContainer" containerID="29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426" Feb 23 09:15:55 crc kubenswrapper[4940]: E0223 09:15:55.660482 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426\": container with ID starting with 29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426 not found: ID does not exist" containerID="29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 
09:15:55.660523 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426"} err="failed to get container status \"29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426\": rpc error: code = NotFound desc = could not find container \"29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426\": container with ID starting with 29b4a89d4629409424e9b16958199290a8201445bc1d87b8911372cab7366426 not found: ID does not exist" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.660551 4940 scope.go:117] "RemoveContainer" containerID="647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9" Feb 23 09:15:55 crc kubenswrapper[4940]: E0223 09:15:55.661065 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9\": container with ID starting with 647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9 not found: ID does not exist" containerID="647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.661122 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9"} err="failed to get container status \"647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9\": rpc error: code = NotFound desc = could not find container \"647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9\": container with ID starting with 647a417859c2a616735c6df5f4ef105060bcc6016b5cd3c2770576e3683321c9 not found: ID does not exist" Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.661161 4940 scope.go:117] "RemoveContainer" containerID="f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c" Feb 23 09:15:55 crc 
kubenswrapper[4940]: E0223 09:15:55.661485 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c\": container with ID starting with f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c not found: ID does not exist" containerID="f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c"
Feb 23 09:15:55 crc kubenswrapper[4940]: I0223 09:15:55.661507 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c"} err="failed to get container status \"f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c\": rpc error: code = NotFound desc = could not find container \"f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c\": container with ID starting with f0bf3e3e6d56268382a35d3a073157225cf256f235d3632b69eb3bc4eaadaf8c not found: ID does not exist"
Feb 23 09:15:57 crc kubenswrapper[4940]: I0223 09:15:57.359044 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1380233a-2c52-4948-a790-7c85da2ac891" path="/var/lib/kubelet/pods/1380233a-2c52-4948-a790-7c85da2ac891/volumes"
Feb 23 09:16:01 crc kubenswrapper[4940]: I0223 09:16:01.346119 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:16:01 crc kubenswrapper[4940]: E0223 09:16:01.346856 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:16:13 crc kubenswrapper[4940]: I0223 09:16:13.350269 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:16:13 crc kubenswrapper[4940]: E0223 09:16:13.351155 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.531725 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wf6jf"]
Feb 23 09:16:19 crc kubenswrapper[4940]: E0223 09:16:19.532719 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="extract-utilities"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.532740 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="extract-utilities"
Feb 23 09:16:19 crc kubenswrapper[4940]: E0223 09:16:19.532753 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="registry-server"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.532762 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="registry-server"
Feb 23 09:16:19 crc kubenswrapper[4940]: E0223 09:16:19.532785 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="extract-content"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.532794 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="extract-content"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.533077 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="1380233a-2c52-4948-a790-7c85da2ac891" containerName="registry-server"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.534904 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.548998 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wf6jf"]
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.720509 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-catalog-content\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.720807 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-utilities\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.720931 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rspxp\" (UniqueName: \"kubernetes.io/projected/efdbc051-9c8e-40fd-a34f-5de187695bdf-kube-api-access-rspxp\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.822948 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-catalog-content\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.823034 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-utilities\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.823069 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspxp\" (UniqueName: \"kubernetes.io/projected/efdbc051-9c8e-40fd-a34f-5de187695bdf-kube-api-access-rspxp\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.824008 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-catalog-content\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.824307 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-utilities\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.846471 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspxp\" (UniqueName: \"kubernetes.io/projected/efdbc051-9c8e-40fd-a34f-5de187695bdf-kube-api-access-rspxp\") pod \"community-operators-wf6jf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") " pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:19 crc kubenswrapper[4940]: I0223 09:16:19.869396 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:20 crc kubenswrapper[4940]: I0223 09:16:20.423005 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wf6jf"]
Feb 23 09:16:20 crc kubenswrapper[4940]: I0223 09:16:20.795623 4940 generic.go:334] "Generic (PLEG): container finished" podID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerID="5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69" exitCode=0
Feb 23 09:16:20 crc kubenswrapper[4940]: I0223 09:16:20.795750 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerDied","Data":"5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69"}
Feb 23 09:16:20 crc kubenswrapper[4940]: I0223 09:16:20.795965 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerStarted","Data":"9032e55fe832fe40adb5fb18580034f259837650923495e2bb223d31eed7a43f"}
Feb 23 09:16:21 crc kubenswrapper[4940]: I0223 09:16:21.810534 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerStarted","Data":"e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849"}
Feb 23 09:16:22 crc kubenswrapper[4940]: I0223 09:16:22.820963 4940 generic.go:334] "Generic (PLEG): container finished" podID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerID="e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849" exitCode=0
Feb 23 09:16:22 crc kubenswrapper[4940]: I0223 09:16:22.821090 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerDied","Data":"e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849"}
Feb 23 09:16:23 crc kubenswrapper[4940]: I0223 09:16:23.836028 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerStarted","Data":"9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7"}
Feb 23 09:16:23 crc kubenswrapper[4940]: I0223 09:16:23.870540 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wf6jf" podStartSLOduration=2.313390416 podStartE2EDuration="4.870512742s" podCreationTimestamp="2026-02-23 09:16:19 +0000 UTC" firstStartedPulling="2026-02-23 09:16:20.799179829 +0000 UTC m=+1712.182385986" lastFinishedPulling="2026-02-23 09:16:23.356302155 +0000 UTC m=+1714.739508312" observedRunningTime="2026-02-23 09:16:23.855003309 +0000 UTC m=+1715.238209516" watchObservedRunningTime="2026-02-23 09:16:23.870512742 +0000 UTC m=+1715.253718939"
Feb 23 09:16:26 crc kubenswrapper[4940]: I0223 09:16:26.346141 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:16:26 crc kubenswrapper[4940]: E0223 09:16:26.346974 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:16:29 crc kubenswrapper[4940]: I0223 09:16:29.869603 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:29 crc kubenswrapper[4940]: I0223 09:16:29.870139 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:29 crc kubenswrapper[4940]: I0223 09:16:29.919807 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:29 crc kubenswrapper[4940]: I0223 09:16:29.985917 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:30 crc kubenswrapper[4940]: I0223 09:16:30.164056 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wf6jf"]
Feb 23 09:16:31 crc kubenswrapper[4940]: I0223 09:16:31.909406 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wf6jf" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="registry-server" containerID="cri-o://9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7" gracePeriod=2
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.348453 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.503909 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-utilities\") pod \"efdbc051-9c8e-40fd-a34f-5de187695bdf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") "
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.504078 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rspxp\" (UniqueName: \"kubernetes.io/projected/efdbc051-9c8e-40fd-a34f-5de187695bdf-kube-api-access-rspxp\") pod \"efdbc051-9c8e-40fd-a34f-5de187695bdf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") "
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.504222 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-catalog-content\") pod \"efdbc051-9c8e-40fd-a34f-5de187695bdf\" (UID: \"efdbc051-9c8e-40fd-a34f-5de187695bdf\") "
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.504699 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-utilities" (OuterVolumeSpecName: "utilities") pod "efdbc051-9c8e-40fd-a34f-5de187695bdf" (UID: "efdbc051-9c8e-40fd-a34f-5de187695bdf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.505213 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.516931 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdbc051-9c8e-40fd-a34f-5de187695bdf-kube-api-access-rspxp" (OuterVolumeSpecName: "kube-api-access-rspxp") pod "efdbc051-9c8e-40fd-a34f-5de187695bdf" (UID: "efdbc051-9c8e-40fd-a34f-5de187695bdf"). InnerVolumeSpecName "kube-api-access-rspxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.569991 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efdbc051-9c8e-40fd-a34f-5de187695bdf" (UID: "efdbc051-9c8e-40fd-a34f-5de187695bdf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.607034 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rspxp\" (UniqueName: \"kubernetes.io/projected/efdbc051-9c8e-40fd-a34f-5de187695bdf-kube-api-access-rspxp\") on node \"crc\" DevicePath \"\""
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.607070 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efdbc051-9c8e-40fd-a34f-5de187695bdf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.877284 4940 scope.go:117] "RemoveContainer" containerID="d6ba6422de2258f1e311c528e44b43ca195cb07e82d58cb1c86e0becabb0880f"
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.903239 4940 scope.go:117] "RemoveContainer" containerID="ef014b2266efe4f98cd310710c01009d7f5c93e4347f3d5b8d1c34fbfa8ef086"
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.925422 4940 generic.go:334] "Generic (PLEG): container finished" podID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerID="9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7" exitCode=0
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.925570 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wf6jf"
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.926440 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerDied","Data":"9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7"}
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.926506 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wf6jf" event={"ID":"efdbc051-9c8e-40fd-a34f-5de187695bdf","Type":"ContainerDied","Data":"9032e55fe832fe40adb5fb18580034f259837650923495e2bb223d31eed7a43f"}
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.926528 4940 scope.go:117] "RemoveContainer" containerID="9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7"
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.967116 4940 scope.go:117] "RemoveContainer" containerID="e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849"
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.969086 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wf6jf"]
Feb 23 09:16:32 crc kubenswrapper[4940]: I0223 09:16:32.981885 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wf6jf"]
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.031865 4940 scope.go:117] "RemoveContainer" containerID="5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.105822 4940 scope.go:117] "RemoveContainer" containerID="9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7"
Feb 23 09:16:33 crc kubenswrapper[4940]: E0223 09:16:33.107080 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7\": container with ID starting with 9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7 not found: ID does not exist" containerID="9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.107121 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7"} err="failed to get container status \"9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7\": rpc error: code = NotFound desc = could not find container \"9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7\": container with ID starting with 9a6d40393afa7ab92b4de7233f3c21240d6bb7c78eb3a033abdeb4037c5aa8c7 not found: ID does not exist"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.107150 4940 scope.go:117] "RemoveContainer" containerID="e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849"
Feb 23 09:16:33 crc kubenswrapper[4940]: E0223 09:16:33.111113 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849\": container with ID starting with e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849 not found: ID does not exist" containerID="e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.111171 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849"} err="failed to get container status \"e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849\": rpc error: code = NotFound desc = could not find container \"e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849\": container with ID starting with e68d0a934a27bd3b9278fb985ae1e5f8d5b0f5de128e961ebf415fac27a1f849 not found: ID does not exist"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.111199 4940 scope.go:117] "RemoveContainer" containerID="5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69"
Feb 23 09:16:33 crc kubenswrapper[4940]: E0223 09:16:33.115183 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69\": container with ID starting with 5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69 not found: ID does not exist" containerID="5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.115230 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69"} err="failed to get container status \"5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69\": rpc error: code = NotFound desc = could not find container \"5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69\": container with ID starting with 5e5a39ebff0c3ed34669e740cb7d08400a55ed8e8999711148f9e33fc7abed69 not found: ID does not exist"
Feb 23 09:16:33 crc kubenswrapper[4940]: I0223 09:16:33.367919 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" path="/var/lib/kubelet/pods/efdbc051-9c8e-40fd-a34f-5de187695bdf/volumes"
Feb 23 09:16:39 crc kubenswrapper[4940]: I0223 09:16:39.354209 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:16:39 crc kubenswrapper[4940]: E0223 09:16:39.355089 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:16:50 crc kubenswrapper[4940]: I0223 09:16:50.347639 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:16:50 crc kubenswrapper[4940]: E0223 09:16:50.348377 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:17:04 crc kubenswrapper[4940]: I0223 09:17:04.346475 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:17:04 crc kubenswrapper[4940]: E0223 09:17:04.347181 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:17:08 crc kubenswrapper[4940]: I0223 09:17:08.264906 4940 generic.go:334] "Generic (PLEG): container finished" podID="5d90dbb8-e870-41e1-bbab-a053b479fee1" containerID="8f4df081347b94b5752691b9e12633689a467637854c0c25fe55527f2c3effab" exitCode=0
Feb 23 09:17:08 crc kubenswrapper[4940]: I0223 09:17:08.265697 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" event={"ID":"5d90dbb8-e870-41e1-bbab-a053b479fee1","Type":"ContainerDied","Data":"8f4df081347b94b5752691b9e12633689a467637854c0c25fe55527f2c3effab"}
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.689168 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.764635 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs74d\" (UniqueName: \"kubernetes.io/projected/5d90dbb8-e870-41e1-bbab-a053b479fee1-kube-api-access-xs74d\") pod \"5d90dbb8-e870-41e1-bbab-a053b479fee1\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") "
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.764797 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-bootstrap-combined-ca-bundle\") pod \"5d90dbb8-e870-41e1-bbab-a053b479fee1\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") "
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.764837 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-inventory\") pod \"5d90dbb8-e870-41e1-bbab-a053b479fee1\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") "
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.764873 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-ssh-key-openstack-edpm-ipam\") pod \"5d90dbb8-e870-41e1-bbab-a053b479fee1\" (UID: \"5d90dbb8-e870-41e1-bbab-a053b479fee1\") "
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.772525 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5d90dbb8-e870-41e1-bbab-a053b479fee1" (UID: "5d90dbb8-e870-41e1-bbab-a053b479fee1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.776154 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d90dbb8-e870-41e1-bbab-a053b479fee1-kube-api-access-xs74d" (OuterVolumeSpecName: "kube-api-access-xs74d") pod "5d90dbb8-e870-41e1-bbab-a053b479fee1" (UID: "5d90dbb8-e870-41e1-bbab-a053b479fee1"). InnerVolumeSpecName "kube-api-access-xs74d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.802433 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5d90dbb8-e870-41e1-bbab-a053b479fee1" (UID: "5d90dbb8-e870-41e1-bbab-a053b479fee1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.804806 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-inventory" (OuterVolumeSpecName: "inventory") pod "5d90dbb8-e870-41e1-bbab-a053b479fee1" (UID: "5d90dbb8-e870-41e1-bbab-a053b479fee1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.867393 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.867439 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.867454 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs74d\" (UniqueName: \"kubernetes.io/projected/5d90dbb8-e870-41e1-bbab-a053b479fee1-kube-api-access-xs74d\") on node \"crc\" DevicePath \"\""
Feb 23 09:17:09 crc kubenswrapper[4940]: I0223 09:17:09.867466 4940 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d90dbb8-e870-41e1-bbab-a053b479fee1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.284404 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh" event={"ID":"5d90dbb8-e870-41e1-bbab-a053b479fee1","Type":"ContainerDied","Data":"f129be20f020f7057308f39c38070f0aa2ca4ff07508d4fe655ceee1a0307c7c"}
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.284452 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f129be20f020f7057308f39c38070f0aa2ca4ff07508d4fe655ceee1a0307c7c"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.284464 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.388403 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"]
Feb 23 09:17:10 crc kubenswrapper[4940]: E0223 09:17:10.388894 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="extract-utilities"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.388918 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="extract-utilities"
Feb 23 09:17:10 crc kubenswrapper[4940]: E0223 09:17:10.388944 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d90dbb8-e870-41e1-bbab-a053b479fee1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.388954 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d90dbb8-e870-41e1-bbab-a053b479fee1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:17:10 crc kubenswrapper[4940]: E0223 09:17:10.388979 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="registry-server"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.388987 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="registry-server"
Feb 23 09:17:10 crc kubenswrapper[4940]: E0223 09:17:10.389004 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="extract-content"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.389012 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="extract-content"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.389284 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="efdbc051-9c8e-40fd-a34f-5de187695bdf" containerName="registry-server"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.389315 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d90dbb8-e870-41e1-bbab-a053b479fee1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.390370 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.392828 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.392949 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.393043 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.393531 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.409563 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"]
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.480971 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrl9g\" (UniqueName: \"kubernetes.io/projected/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-kube-api-access-lrl9g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.481255 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.481564 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.583910 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.584052 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.584115 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrl9g\" (UniqueName: \"kubernetes.io/projected/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-kube-api-access-lrl9g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.588680 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.588683 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.605316 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrl9g\" (UniqueName: \"kubernetes.io/projected/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-kube-api-access-lrl9g\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:10 crc kubenswrapper[4940]: I0223 09:17:10.710284 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"
Feb 23 09:17:11 crc kubenswrapper[4940]: I0223 09:17:11.239535 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb"]
Feb 23 09:17:11 crc kubenswrapper[4940]: I0223 09:17:11.297693 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" event={"ID":"50cd61db-fb52-4abe-a3c6-7c3e3777d04b","Type":"ContainerStarted","Data":"2397429e3a24f79465f1438f5b3632e7f566de6ddaca8716986e8f60de68b8a1"}
Feb 23 09:17:13 crc kubenswrapper[4940]: I0223 09:17:13.332264 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" event={"ID":"50cd61db-fb52-4abe-a3c6-7c3e3777d04b","Type":"ContainerStarted","Data":"2518991c82dbe19b79443c061e123019233ce9608b1e32d59394673ebe26c198"}
Feb 23 09:17:13 crc kubenswrapper[4940]: I0223 09:17:13.361144 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" podStartSLOduration=2.425356259 podStartE2EDuration="3.361121141s" podCreationTimestamp="2026-02-23 09:17:10 +0000 UTC" firstStartedPulling="2026-02-23 09:17:11.246870227 +0000 UTC m=+1762.630076424" lastFinishedPulling="2026-02-23 09:17:12.182635149 +0000 UTC m=+1763.565841306" observedRunningTime="2026-02-23 09:17:13.358307854 +0000 UTC m=+1764.741514021" watchObservedRunningTime="2026-02-23 09:17:13.361121141 +0000 UTC m=+1764.744327308"
Feb 23 09:17:19 crc kubenswrapper[4940]: I0223 09:17:19.357281 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:17:19 crc kubenswrapper[4940]: E0223 09:17:19.358122 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:17:33 crc kubenswrapper[4940]: I0223 09:17:33.351812 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:17:33 crc kubenswrapper[4940]: E0223 09:17:33.353135 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:17:44 crc kubenswrapper[4940]: I0223 09:17:44.346246 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:17:44 crc kubenswrapper[4940]: E0223 09:17:44.348379 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:17:55 crc kubenswrapper[4940]: I0223 09:17:55.346805 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:17:55 crc kubenswrapper[4940]: E0223 09:17:55.347982 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:18:05 crc kubenswrapper[4940]: I0223 09:18:05.039121 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dzncq"] Feb 23 09:18:05 crc kubenswrapper[4940]: I0223 09:18:05.049536 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dzncq"] Feb 23 09:18:05 crc kubenswrapper[4940]: I0223 09:18:05.365104 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4374df7-da62-4cf6-a912-f1463d42cf3a" path="/var/lib/kubelet/pods/f4374df7-da62-4cf6-a912-f1463d42cf3a/volumes" Feb 23 09:18:06 crc kubenswrapper[4940]: I0223 09:18:06.037092 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2013-account-create-update-5rbqj"] Feb 23 09:18:06 crc kubenswrapper[4940]: I0223 09:18:06.048127 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3b70-account-create-update-7xz8h"] Feb 23 09:18:06 crc kubenswrapper[4940]: I0223 09:18:06.058480 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pjzmw"] Feb 23 09:18:06 crc kubenswrapper[4940]: I0223 09:18:06.068555 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3b70-account-create-update-7xz8h"] Feb 23 09:18:06 crc kubenswrapper[4940]: I0223 09:18:06.078515 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2013-account-create-update-5rbqj"] Feb 23 09:18:06 crc kubenswrapper[4940]: I0223 09:18:06.089353 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pjzmw"] Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.047488 4940 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-25d4-account-create-update-xmnzj"] Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.064371 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-htjl5"] Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.073753 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-25d4-account-create-update-xmnzj"] Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.083473 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-htjl5"] Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.362516 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0d4f47-b6ec-4115-95ed-466d4aa7edf5" path="/var/lib/kubelet/pods/7c0d4f47-b6ec-4115-95ed-466d4aa7edf5/volumes" Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.367437 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40974fa-e647-45d0-b3a4-6d9f99b3039d" path="/var/lib/kubelet/pods/a40974fa-e647-45d0-b3a4-6d9f99b3039d/volumes" Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.371406 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b" path="/var/lib/kubelet/pods/b3b4aa31-5e69-4df5-ba1c-19b12f8ba67b/volumes" Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.377737 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7335ef7-f87f-4e06-9992-59f607a87dfa" path="/var/lib/kubelet/pods/b7335ef7-f87f-4e06-9992-59f607a87dfa/volumes" Feb 23 09:18:07 crc kubenswrapper[4940]: I0223 09:18:07.381036 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e" path="/var/lib/kubelet/pods/fbd1d1cf-4935-4c7b-b7d2-35a6d801d15e/volumes" Feb 23 09:18:10 crc kubenswrapper[4940]: I0223 09:18:10.345786 4940 scope.go:117] "RemoveContainer" 
containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:18:10 crc kubenswrapper[4940]: E0223 09:18:10.346665 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:18:13 crc kubenswrapper[4940]: I0223 09:18:13.060156 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k4dkj"] Feb 23 09:18:13 crc kubenswrapper[4940]: I0223 09:18:13.073748 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k4dkj"] Feb 23 09:18:13 crc kubenswrapper[4940]: I0223 09:18:13.372037 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0" path="/var/lib/kubelet/pods/cf2e1100-e815-4e3c-9d88-aa5cf3fb47d0/volumes" Feb 23 09:18:21 crc kubenswrapper[4940]: I0223 09:18:21.346442 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:18:21 crc kubenswrapper[4940]: E0223 09:18:21.347761 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.033779 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-p9579"] Feb 23 
09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.047306 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-p9579"] Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.103822 4940 scope.go:117] "RemoveContainer" containerID="668cdceee79943f61deb09513605bd5d0263cea76401310aa436e5b03d86db21" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.141695 4940 scope.go:117] "RemoveContainer" containerID="ab3a5e6678b6c95a3c3d418985abae07cc44098450902f3ec7d6342bf9db75aa" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.144098 4940 generic.go:334] "Generic (PLEG): container finished" podID="50cd61db-fb52-4abe-a3c6-7c3e3777d04b" containerID="2518991c82dbe19b79443c061e123019233ce9608b1e32d59394673ebe26c198" exitCode=0 Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.144172 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" event={"ID":"50cd61db-fb52-4abe-a3c6-7c3e3777d04b","Type":"ContainerDied","Data":"2518991c82dbe19b79443c061e123019233ce9608b1e32d59394673ebe26c198"} Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.188582 4940 scope.go:117] "RemoveContainer" containerID="02d61f937ec3457890463f07b84036feb2152a52fc370b14157508774d63207f" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.232835 4940 scope.go:117] "RemoveContainer" containerID="1c7944c7a25ff1cdb994986f3df318030a88b5c9893e000a6275cb74fed9313b" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.271297 4940 scope.go:117] "RemoveContainer" containerID="ae127364224ffbb3721761580fa1deedee4320a20294817cd2b2aa6e16b7b2d8" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.327030 4940 scope.go:117] "RemoveContainer" containerID="238321d87e92995516535d3e05a15bbf1e7b3cfed587e6ba115a0680d34cfb77" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.360560 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e" 
path="/var/lib/kubelet/pods/0fd00530-75d3-4e2e-aaf9-4b67a1f2e95e/volumes" Feb 23 09:18:33 crc kubenswrapper[4940]: I0223 09:18:33.369934 4940 scope.go:117] "RemoveContainer" containerID="4a51a5cdb80d3375354b54c221153f975715ed0531469d89d407194b20d251b1" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.558128 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.682915 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-inventory\") pod \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.683363 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrl9g\" (UniqueName: \"kubernetes.io/projected/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-kube-api-access-lrl9g\") pod \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.683470 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-ssh-key-openstack-edpm-ipam\") pod \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\" (UID: \"50cd61db-fb52-4abe-a3c6-7c3e3777d04b\") " Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.689806 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-kube-api-access-lrl9g" (OuterVolumeSpecName: "kube-api-access-lrl9g") pod "50cd61db-fb52-4abe-a3c6-7c3e3777d04b" (UID: "50cd61db-fb52-4abe-a3c6-7c3e3777d04b"). InnerVolumeSpecName "kube-api-access-lrl9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.719071 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50cd61db-fb52-4abe-a3c6-7c3e3777d04b" (UID: "50cd61db-fb52-4abe-a3c6-7c3e3777d04b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.743058 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-inventory" (OuterVolumeSpecName: "inventory") pod "50cd61db-fb52-4abe-a3c6-7c3e3777d04b" (UID: "50cd61db-fb52-4abe-a3c6-7c3e3777d04b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.786463 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.786510 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:18:34 crc kubenswrapper[4940]: I0223 09:18:34.786523 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrl9g\" (UniqueName: \"kubernetes.io/projected/50cd61db-fb52-4abe-a3c6-7c3e3777d04b-kube-api-access-lrl9g\") on node \"crc\" DevicePath \"\"" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.164018 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" 
event={"ID":"50cd61db-fb52-4abe-a3c6-7c3e3777d04b","Type":"ContainerDied","Data":"2397429e3a24f79465f1438f5b3632e7f566de6ddaca8716986e8f60de68b8a1"} Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.164068 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2397429e3a24f79465f1438f5b3632e7f566de6ddaca8716986e8f60de68b8a1" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.164132 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.275135 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7"] Feb 23 09:18:35 crc kubenswrapper[4940]: E0223 09:18:35.275596 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cd61db-fb52-4abe-a3c6-7c3e3777d04b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.275643 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cd61db-fb52-4abe-a3c6-7c3e3777d04b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.275957 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cd61db-fb52-4abe-a3c6-7c3e3777d04b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.276720 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.278981 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.279717 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.280879 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.282296 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.290262 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7"] Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.399027 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ckf\" (UniqueName: \"kubernetes.io/projected/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-kube-api-access-r6ckf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.399596 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: 
I0223 09:18:35.399758 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.502466 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.502680 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ckf\" (UniqueName: \"kubernetes.io/projected/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-kube-api-access-r6ckf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.503048 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.513113 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.513244 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.522334 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ckf\" (UniqueName: \"kubernetes.io/projected/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-kube-api-access-r6ckf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z77p7\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:35 crc kubenswrapper[4940]: I0223 09:18:35.607338 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" Feb 23 09:18:36 crc kubenswrapper[4940]: I0223 09:18:36.181915 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7"] Feb 23 09:18:36 crc kubenswrapper[4940]: I0223 09:18:36.346079 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:18:36 crc kubenswrapper[4940]: E0223 09:18:36.346400 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:18:37 crc kubenswrapper[4940]: I0223 09:18:37.185768 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" event={"ID":"10b1d407-edfe-4a01-9d25-ae2d0491e2aa","Type":"ContainerStarted","Data":"7b907c639c813690a7a1d7399b9db420a9471344b95f47cabf51d81da8626f9b"} Feb 23 09:18:37 crc kubenswrapper[4940]: I0223 09:18:37.186078 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" event={"ID":"10b1d407-edfe-4a01-9d25-ae2d0491e2aa","Type":"ContainerStarted","Data":"acb877d151e9e0953616571c5c1b432fe630c17b843eec2bc549515475aaad9b"} Feb 23 09:18:37 crc kubenswrapper[4940]: I0223 09:18:37.208304 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" podStartSLOduration=1.8054301320000001 podStartE2EDuration="2.208285472s" podCreationTimestamp="2026-02-23 09:18:35 +0000 UTC" 
firstStartedPulling="2026-02-23 09:18:36.185435967 +0000 UTC m=+1847.568642164" lastFinishedPulling="2026-02-23 09:18:36.588291347 +0000 UTC m=+1847.971497504" observedRunningTime="2026-02-23 09:18:37.206405063 +0000 UTC m=+1848.589611250" watchObservedRunningTime="2026-02-23 09:18:37.208285472 +0000 UTC m=+1848.591491629" Feb 23 09:18:42 crc kubenswrapper[4940]: I0223 09:18:42.039210 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-t4mpt"] Feb 23 09:18:42 crc kubenswrapper[4940]: I0223 09:18:42.048258 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-t4mpt"] Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.038916 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f7ce-account-create-update-hms8s"] Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.055375 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-cjlx5"] Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.066224 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f7ce-account-create-update-hms8s"] Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.077373 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-cjlx5"] Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.360721 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bcc7d8-8eed-4039-97fa-d156a882474c" path="/var/lib/kubelet/pods/33bcc7d8-8eed-4039-97fa-d156a882474c/volumes" Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.362425 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534d5483-19f1-48db-92f4-7311eb8e0bdd" path="/var/lib/kubelet/pods/534d5483-19f1-48db-92f4-7311eb8e0bdd/volumes" Feb 23 09:18:43 crc kubenswrapper[4940]: I0223 09:18:43.363940 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbecedf9-3f67-471e-b8e7-8945107b9055" 
path="/var/lib/kubelet/pods/bbecedf9-3f67-471e-b8e7-8945107b9055/volumes" Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.040643 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jw74t"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.053265 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-8086-account-create-update-pmxvb"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.067603 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0ed7-account-create-update-rqmbz"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.079172 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jw74t"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.087145 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zcqlx"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.095175 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-8086-account-create-update-pmxvb"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.103062 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0ed7-account-create-update-rqmbz"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.110958 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zcqlx"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.119158 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fca0-account-create-update-cbpzc"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.126850 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fca0-account-create-update-cbpzc"] Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.366674 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1900fd27-c407-4691-8b8c-c92f97c6829e" 
path="/var/lib/kubelet/pods/1900fd27-c407-4691-8b8c-c92f97c6829e/volumes"
Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.367358 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b8ddee-466a-4cfa-b22f-c5b256a5b602" path="/var/lib/kubelet/pods/52b8ddee-466a-4cfa-b22f-c5b256a5b602/volumes"
Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.367943 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926cba46-e952-43fd-a42e-9dfaa77e74d0" path="/var/lib/kubelet/pods/926cba46-e952-43fd-a42e-9dfaa77e74d0/volumes"
Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.368636 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c6b8ad-74d4-447c-b8b2-e6302e5a2d55" path="/var/lib/kubelet/pods/92c6b8ad-74d4-447c-b8b2-e6302e5a2d55/volumes"
Feb 23 09:18:47 crc kubenswrapper[4940]: I0223 09:18:47.369908 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79ed92a-52e6-42a1-9870-08a965e41cd0" path="/var/lib/kubelet/pods/e79ed92a-52e6-42a1-9870-08a965e41cd0/volumes"
Feb 23 09:18:51 crc kubenswrapper[4940]: I0223 09:18:51.346311 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:18:51 crc kubenswrapper[4940]: E0223 09:18:51.347199 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:18:52 crc kubenswrapper[4940]: I0223 09:18:52.048790 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ztzmr"]
Feb 23 09:18:52 crc kubenswrapper[4940]: I0223 09:18:52.067187 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ztzmr"]
Feb 23 09:18:53 crc kubenswrapper[4940]: I0223 09:18:53.366595 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9063161f-40d0-49a1-a4f2-f68a3aff7897" path="/var/lib/kubelet/pods/9063161f-40d0-49a1-a4f2-f68a3aff7897/volumes"
Feb 23 09:19:06 crc kubenswrapper[4940]: I0223 09:19:06.345791 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e"
Feb 23 09:19:07 crc kubenswrapper[4940]: I0223 09:19:07.526877 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"0e22f4141042ccdf07818d61e1f9266aa39799c5d25cbea563603ab4fa191be6"}
Feb 23 09:19:25 crc kubenswrapper[4940]: I0223 09:19:25.045115 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hcm9c"]
Feb 23 09:19:25 crc kubenswrapper[4940]: I0223 09:19:25.082396 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hcm9c"]
Feb 23 09:19:25 crc kubenswrapper[4940]: I0223 09:19:25.356058 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf56a426-9c5a-4a94-8740-fbe2c05dafbb" path="/var/lib/kubelet/pods/bf56a426-9c5a-4a94-8740-fbe2c05dafbb/volumes"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.543949 4940 scope.go:117] "RemoveContainer" containerID="93564bc54012fea7c0d172def0757dd0e72e20b2ee023b5224e79fa561559ff4"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.598588 4940 scope.go:117] "RemoveContainer" containerID="394176dddcb0382c0a2bbc210c6359d6c0e4bb26ecfc27caaa2aa22ad5201b06"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.647769 4940 scope.go:117] "RemoveContainer" containerID="3692a952631f69e3210d7d0c41508b109967ad4af1b8f9e7a5c6505b602976b0"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.712905 4940 scope.go:117] "RemoveContainer" containerID="422da4d80f32fe87000a2d770ab1ade34428ef47d6c3a1364b3fff25e0bf9ed5"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.761087 4940 scope.go:117] "RemoveContainer" containerID="ad0edd3ade96ef715c3dfd49c9b7bdee951b4f2ba1ade606630cba78fd183785"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.812471 4940 scope.go:117] "RemoveContainer" containerID="a899ed4bfcf3daffef0949e5d81e86917d231cd12db0067ee4d54d594794bd8b"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.866641 4940 scope.go:117] "RemoveContainer" containerID="893611724f44d5a274aacb48dc70ebf6c251d1f8b411b2a6851f87e6d911ac78"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.892216 4940 scope.go:117] "RemoveContainer" containerID="100755ce628d7b75bc077814d6db070d80ec0892f5ebded60652945511ef5835"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.923186 4940 scope.go:117] "RemoveContainer" containerID="718eab3076e08c740b11b125044da354b569ad3ae05e5abee77eeeaf7cc395d0"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.953874 4940 scope.go:117] "RemoveContainer" containerID="a271c8366b2a73340221775abdf9bc7b756fa893190124b600d8d50ad96ec250"
Feb 23 09:19:33 crc kubenswrapper[4940]: I0223 09:19:33.990328 4940 scope.go:117] "RemoveContainer" containerID="5113cc9aed38e0c069ca83f4113fd2f41c0a4e4ce5a2416899c6c49e8954c612"
Feb 23 09:19:37 crc kubenswrapper[4940]: I0223 09:19:37.821670 4940 generic.go:334] "Generic (PLEG): container finished" podID="10b1d407-edfe-4a01-9d25-ae2d0491e2aa" containerID="7b907c639c813690a7a1d7399b9db420a9471344b95f47cabf51d81da8626f9b" exitCode=0
Feb 23 09:19:37 crc kubenswrapper[4940]: I0223 09:19:37.823113 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" event={"ID":"10b1d407-edfe-4a01-9d25-ae2d0491e2aa","Type":"ContainerDied","Data":"7b907c639c813690a7a1d7399b9db420a9471344b95f47cabf51d81da8626f9b"}
Feb 23 09:19:38 crc kubenswrapper[4940]: I0223 09:19:38.036434 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mphgm"]
Feb 23 09:19:38 crc kubenswrapper[4940]: I0223 09:19:38.049247 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rxlnz"]
Feb 23 09:19:38 crc kubenswrapper[4940]: I0223 09:19:38.061564 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7d9wv"]
Feb 23 09:19:38 crc kubenswrapper[4940]: I0223 09:19:38.072695 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rxlnz"]
Feb 23 09:19:38 crc kubenswrapper[4940]: I0223 09:19:38.083141 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7d9wv"]
Feb 23 09:19:38 crc kubenswrapper[4940]: I0223 09:19:38.093150 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mphgm"]
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.241741 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.319632 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-inventory\") pod \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") "
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.319694 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6ckf\" (UniqueName: \"kubernetes.io/projected/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-kube-api-access-r6ckf\") pod \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") "
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.319963 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-ssh-key-openstack-edpm-ipam\") pod \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\" (UID: \"10b1d407-edfe-4a01-9d25-ae2d0491e2aa\") "
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.325417 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-kube-api-access-r6ckf" (OuterVolumeSpecName: "kube-api-access-r6ckf") pod "10b1d407-edfe-4a01-9d25-ae2d0491e2aa" (UID: "10b1d407-edfe-4a01-9d25-ae2d0491e2aa"). InnerVolumeSpecName "kube-api-access-r6ckf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.358197 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "10b1d407-edfe-4a01-9d25-ae2d0491e2aa" (UID: "10b1d407-edfe-4a01-9d25-ae2d0491e2aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.358701 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-inventory" (OuterVolumeSpecName: "inventory") pod "10b1d407-edfe-4a01-9d25-ae2d0491e2aa" (UID: "10b1d407-edfe-4a01-9d25-ae2d0491e2aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.360284 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c217a33-e32d-41cc-8fda-6691bf37db15" path="/var/lib/kubelet/pods/7c217a33-e32d-41cc-8fda-6691bf37db15/volumes"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.361036 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b224c257-f773-40f9-b62b-8d6e897ed198" path="/var/lib/kubelet/pods/b224c257-f773-40f9-b62b-8d6e897ed198/volumes"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.361732 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b430a58d-ed32-4642-ac93-d6f0de2eeb0d" path="/var/lib/kubelet/pods/b430a58d-ed32-4642-ac93-d6f0de2eeb0d/volumes"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.422726 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 09:19:39 crc kubenswrapper[4940]:
I0223 09:19:39.423020 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6ckf\" (UniqueName: \"kubernetes.io/projected/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-kube-api-access-r6ckf\") on node \"crc\" DevicePath \"\""
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.423033 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/10b1d407-edfe-4a01-9d25-ae2d0491e2aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.847013 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7" event={"ID":"10b1d407-edfe-4a01-9d25-ae2d0491e2aa","Type":"ContainerDied","Data":"acb877d151e9e0953616571c5c1b432fe630c17b843eec2bc549515475aaad9b"}
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.847071 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb877d151e9e0953616571c5c1b432fe630c17b843eec2bc549515475aaad9b"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.847077 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z77p7"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.920593 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"]
Feb 23 09:19:39 crc kubenswrapper[4940]: E0223 09:19:39.921527 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b1d407-edfe-4a01-9d25-ae2d0491e2aa" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.921556 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b1d407-edfe-4a01-9d25-ae2d0491e2aa" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.921805 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b1d407-edfe-4a01-9d25-ae2d0491e2aa" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.922671 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.925868 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.926157 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.926558 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.931831 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 09:19:39 crc kubenswrapper[4940]: I0223 09:19:39.951858 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"]
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.034659 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.034756 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.034828 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtmr\" (UniqueName: \"kubernetes.io/projected/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-kube-api-access-kxtmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.136310 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxtmr\" (UniqueName: \"kubernetes.io/projected/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-kube-api-access-kxtmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.136433 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.136491 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.140176 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.140295 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.155109 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxtmr\" (UniqueName: \"kubernetes.io/projected/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-kube-api-access-kxtmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.245722 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.742693 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"]
Feb 23 09:19:40 crc kubenswrapper[4940]: I0223 09:19:40.858633 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b" event={"ID":"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8","Type":"ContainerStarted","Data":"4b6d9ebf1d618f8e231fd0ea9a1e9eaaa63125d2861c5741eec0ca09d0faa91d"}
Feb 23 09:19:41 crc kubenswrapper[4940]: I0223 09:19:41.866573 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b" event={"ID":"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8","Type":"ContainerStarted","Data":"b0c98e89920e1f0c7866f31469f1ca1c1801c3c817bb738300a21f9ceb18d316"}
Feb 23 09:19:41 crc kubenswrapper[4940]: I0223 09:19:41.890374 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b" podStartSLOduration=2.418494225 podStartE2EDuration="2.890357297s" podCreationTimestamp="2026-02-23 09:19:39 +0000 UTC" firstStartedPulling="2026-02-23 09:19:40.7473758 +0000 UTC m=+1912.130581957" lastFinishedPulling="2026-02-23 09:19:41.219238872 +0000 UTC m=+1912.602445029" observedRunningTime="2026-02-23 09:19:41.886366583 +0000 UTC m=+1913.269572750" watchObservedRunningTime="2026-02-23 09:19:41.890357297 +0000 UTC m=+1913.273563454"
Feb 23 09:19:45 crc kubenswrapper[4940]: I0223 09:19:45.906543 4940 generic.go:334] "Generic (PLEG): container finished" podID="f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" containerID="b0c98e89920e1f0c7866f31469f1ca1c1801c3c817bb738300a21f9ceb18d316" exitCode=0
Feb 23 09:19:45 crc kubenswrapper[4940]: I0223 09:19:45.906647 4940 kubelet.go:2453] "SyncLoop (PLEG):
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b" event={"ID":"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8","Type":"ContainerDied","Data":"b0c98e89920e1f0c7866f31469f1ca1c1801c3c817bb738300a21f9ceb18d316"}
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.319058 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.400990 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-ssh-key-openstack-edpm-ipam\") pod \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") "
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.401158 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-inventory\") pod \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") "
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.401221 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxtmr\" (UniqueName: \"kubernetes.io/projected/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-kube-api-access-kxtmr\") pod \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\" (UID: \"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8\") "
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.410547 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-kube-api-access-kxtmr" (OuterVolumeSpecName: "kube-api-access-kxtmr") pod "f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" (UID: "f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8"). InnerVolumeSpecName "kube-api-access-kxtmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.476630 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-inventory" (OuterVolumeSpecName: "inventory") pod "f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" (UID: "f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.481922 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" (UID: "f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.503792 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxtmr\" (UniqueName: \"kubernetes.io/projected/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-kube-api-access-kxtmr\") on node \"crc\" DevicePath \"\""
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.503830 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.503867 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.924985 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b" event={"ID":"f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8","Type":"ContainerDied","Data":"4b6d9ebf1d618f8e231fd0ea9a1e9eaaa63125d2861c5741eec0ca09d0faa91d"}
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.925352 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b6d9ebf1d618f8e231fd0ea9a1e9eaaa63125d2861c5741eec0ca09d0faa91d"
Feb 23 09:19:47 crc kubenswrapper[4940]: I0223 09:19:47.925055 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.015550 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"]
Feb 23 09:19:48 crc kubenswrapper[4940]: E0223 09:19:48.016160 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.016226 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.016490 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.018476 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.020665 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.020780 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.021861 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.022285 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.028366 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"]
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.116334 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.116412 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lz5\" (UniqueName: \"kubernetes.io/projected/05541a9b-b462-4150-b0d7-131d75a1d775-kube-api-access-c9lz5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.116543 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.218709 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.219046 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9lz5\" (UniqueName: \"kubernetes.io/projected/05541a9b-b462-4150-b0d7-131d75a1d775-kube-api-access-c9lz5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.219222 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.222919 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-inventory\") pod
\"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.224372 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.255230 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9lz5\" (UniqueName: \"kubernetes.io/projected/05541a9b-b462-4150-b0d7-131d75a1d775-kube-api-access-c9lz5\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nntpm\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.338204 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.893875 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"]
Feb 23 09:19:48 crc kubenswrapper[4940]: I0223 09:19:48.935322 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm" event={"ID":"05541a9b-b462-4150-b0d7-131d75a1d775","Type":"ContainerStarted","Data":"342d21b3aae7d6dd73bd7916620b01ceb9ce0f8d6c3ba9e4abc19fc6584ca041"}
Feb 23 09:19:49 crc kubenswrapper[4940]: I0223 09:19:49.945559 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm" event={"ID":"05541a9b-b462-4150-b0d7-131d75a1d775","Type":"ContainerStarted","Data":"99f797b23d8ec83d7fd6a483f4c7ece2182be6d613f496afb77bcfaa104f2a02"}
Feb 23 09:19:49 crc kubenswrapper[4940]: I0223 09:19:49.990817 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm" podStartSLOduration=2.618141501 podStartE2EDuration="2.990795981s" podCreationTimestamp="2026-02-23 09:19:47 +0000 UTC" firstStartedPulling="2026-02-23 09:19:48.899479554 +0000 UTC m=+1920.282685711" lastFinishedPulling="2026-02-23 09:19:49.272134034 +0000 UTC m=+1920.655340191" observedRunningTime="2026-02-23 09:19:49.982218254 +0000 UTC m=+1921.365424421" watchObservedRunningTime="2026-02-23 09:19:49.990795981 +0000 UTC m=+1921.374002128"
Feb 23 09:19:53 crc kubenswrapper[4940]: I0223 09:19:53.048348 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6p69q"]
Feb 23 09:19:53 crc kubenswrapper[4940]: I0223 09:19:53.057364 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6p69q"]
Feb 23 09:19:53 crc kubenswrapper[4940]: I0223 09:19:53.358885 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab97aa50-1b14-4a5c-82cd-1be9f025b2b5" path="/var/lib/kubelet/pods/ab97aa50-1b14-4a5c-82cd-1be9f025b2b5/volumes"
Feb 23 09:20:01 crc kubenswrapper[4940]: I0223 09:20:01.045968 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-ktg94"]
Feb 23 09:20:01 crc kubenswrapper[4940]: I0223 09:20:01.054799 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-ktg94"]
Feb 23 09:20:01 crc kubenswrapper[4940]: I0223 09:20:01.357796 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43f9f8e-d118-4247-b1f0-b6aac984bb4d" path="/var/lib/kubelet/pods/a43f9f8e-d118-4247-b1f0-b6aac984bb4d/volumes"
Feb 23 09:20:25 crc kubenswrapper[4940]: I0223 09:20:25.273092 4940 generic.go:334] "Generic (PLEG): container finished" podID="05541a9b-b462-4150-b0d7-131d75a1d775" containerID="99f797b23d8ec83d7fd6a483f4c7ece2182be6d613f496afb77bcfaa104f2a02" exitCode=0
Feb 23 09:20:25 crc kubenswrapper[4940]: I0223 09:20:25.273182 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm" event={"ID":"05541a9b-b462-4150-b0d7-131d75a1d775","Type":"ContainerDied","Data":"99f797b23d8ec83d7fd6a483f4c7ece2182be6d613f496afb77bcfaa104f2a02"}
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.757235 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm"
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.891416 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-ssh-key-openstack-edpm-ipam\") pod \"05541a9b-b462-4150-b0d7-131d75a1d775\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") "
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.891561 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9lz5\" (UniqueName: \"kubernetes.io/projected/05541a9b-b462-4150-b0d7-131d75a1d775-kube-api-access-c9lz5\") pod \"05541a9b-b462-4150-b0d7-131d75a1d775\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") "
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.891746 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-inventory\") pod \"05541a9b-b462-4150-b0d7-131d75a1d775\" (UID: \"05541a9b-b462-4150-b0d7-131d75a1d775\") "
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.898183 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05541a9b-b462-4150-b0d7-131d75a1d775-kube-api-access-c9lz5" (OuterVolumeSpecName: "kube-api-access-c9lz5") pod "05541a9b-b462-4150-b0d7-131d75a1d775" (UID: "05541a9b-b462-4150-b0d7-131d75a1d775"). InnerVolumeSpecName "kube-api-access-c9lz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.929949 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-inventory" (OuterVolumeSpecName: "inventory") pod "05541a9b-b462-4150-b0d7-131d75a1d775" (UID: "05541a9b-b462-4150-b0d7-131d75a1d775"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.941947 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05541a9b-b462-4150-b0d7-131d75a1d775" (UID: "05541a9b-b462-4150-b0d7-131d75a1d775"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.995051 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9lz5\" (UniqueName: \"kubernetes.io/projected/05541a9b-b462-4150-b0d7-131d75a1d775-kube-api-access-c9lz5\") on node \"crc\" DevicePath \"\""
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.995091 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 09:20:26 crc kubenswrapper[4940]: I0223 09:20:26.995106 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05541a9b-b462-4150-b0d7-131d75a1d775-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.296040 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm" event={"ID":"05541a9b-b462-4150-b0d7-131d75a1d775","Type":"ContainerDied","Data":"342d21b3aae7d6dd73bd7916620b01ceb9ce0f8d6c3ba9e4abc19fc6584ca041"}
Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.296079 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342d21b3aae7d6dd73bd7916620b01ceb9ce0f8d6c3ba9e4abc19fc6584ca041"
Feb 23 09:20:27 crc kubenswrapper[4940]: I0223
09:20:27.296147 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nntpm" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.398301 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8"] Feb 23 09:20:27 crc kubenswrapper[4940]: E0223 09:20:27.398916 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05541a9b-b462-4150-b0d7-131d75a1d775" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.398944 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="05541a9b-b462-4150-b0d7-131d75a1d775" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.399320 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="05541a9b-b462-4150-b0d7-131d75a1d775" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.410856 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.419184 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.419390 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.419464 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.420140 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.452059 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8"] Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.543036 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.543223 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6n6s\" (UniqueName: \"kubernetes.io/projected/bf532816-d5b9-4205-844c-bf70b4cc5c18-kube-api-access-x6n6s\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc 
kubenswrapper[4940]: I0223 09:20:27.543419 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.645654 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6n6s\" (UniqueName: \"kubernetes.io/projected/bf532816-d5b9-4205-844c-bf70b4cc5c18-kube-api-access-x6n6s\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.645731 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.645797 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.649041 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.649445 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.665173 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6n6s\" (UniqueName: \"kubernetes.io/projected/bf532816-d5b9-4205-844c-bf70b4cc5c18-kube-api-access-x6n6s\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:27 crc kubenswrapper[4940]: I0223 09:20:27.742161 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:20:28 crc kubenswrapper[4940]: I0223 09:20:28.264720 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8"] Feb 23 09:20:28 crc kubenswrapper[4940]: I0223 09:20:28.275182 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:20:28 crc kubenswrapper[4940]: I0223 09:20:28.306439 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" event={"ID":"bf532816-d5b9-4205-844c-bf70b4cc5c18","Type":"ContainerStarted","Data":"f4ba0d53d0ee1461bcd3b9752a667551d14151886e63adff395b168fcb87a61b"} Feb 23 09:20:29 crc kubenswrapper[4940]: I0223 09:20:29.318604 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" event={"ID":"bf532816-d5b9-4205-844c-bf70b4cc5c18","Type":"ContainerStarted","Data":"14a50bca0167954ab94ff91006af7d6f0331c917567320f4c524a0351348d44f"} Feb 23 09:20:29 crc kubenswrapper[4940]: I0223 09:20:29.344815 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" podStartSLOduration=1.955303064 podStartE2EDuration="2.344780378s" podCreationTimestamp="2026-02-23 09:20:27 +0000 UTC" firstStartedPulling="2026-02-23 09:20:28.274882868 +0000 UTC m=+1959.658089025" lastFinishedPulling="2026-02-23 09:20:28.664360152 +0000 UTC m=+1960.047566339" observedRunningTime="2026-02-23 09:20:29.337063178 +0000 UTC m=+1960.720269365" watchObservedRunningTime="2026-02-23 09:20:29.344780378 +0000 UTC m=+1960.727986625" Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.055274 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sjg4x"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 
09:20:32.066301 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-50ec-account-create-update-mqlhz"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.078005 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-j6xpr"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.087736 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sjg4x"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.095138 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-53e3-account-create-update-dlb86"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.102256 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-50ec-account-create-update-mqlhz"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.110550 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8slbq"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.118867 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c0be-account-create-update-2b67b"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.126920 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-j6xpr"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.135161 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c0be-account-create-update-2b67b"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.142419 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-53e3-account-create-update-dlb86"] Feb 23 09:20:32 crc kubenswrapper[4940]: I0223 09:20:32.150428 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8slbq"] Feb 23 09:20:33 crc kubenswrapper[4940]: I0223 09:20:33.357359 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2ea39198-ffee-4b2e-9561-71a16fab5149" path="/var/lib/kubelet/pods/2ea39198-ffee-4b2e-9561-71a16fab5149/volumes" Feb 23 09:20:33 crc kubenswrapper[4940]: I0223 09:20:33.358468 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502149a1-62b1-4e45-831b-51d1d10d4265" path="/var/lib/kubelet/pods/502149a1-62b1-4e45-831b-51d1d10d4265/volumes" Feb 23 09:20:33 crc kubenswrapper[4940]: I0223 09:20:33.359527 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5684e490-a8e8-435c-8d93-6b510ca1f90f" path="/var/lib/kubelet/pods/5684e490-a8e8-435c-8d93-6b510ca1f90f/volumes" Feb 23 09:20:33 crc kubenswrapper[4940]: I0223 09:20:33.360631 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871fc6a4-b1d8-4676-977e-10c7bd9bf609" path="/var/lib/kubelet/pods/871fc6a4-b1d8-4676-977e-10c7bd9bf609/volumes" Feb 23 09:20:33 crc kubenswrapper[4940]: I0223 09:20:33.362523 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9a8d01-0012-4ea6-b812-7ef38a27fe7b" path="/var/lib/kubelet/pods/bb9a8d01-0012-4ea6-b812-7ef38a27fe7b/volumes" Feb 23 09:20:33 crc kubenswrapper[4940]: I0223 09:20:33.363559 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9332095-7a85-4e0d-8a06-da6462a9397b" path="/var/lib/kubelet/pods/e9332095-7a85-4e0d-8a06-da6462a9397b/volumes" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.264117 4940 scope.go:117] "RemoveContainer" containerID="b31dced08ed0d4610a6fc8684712ee95a303e4f689bd23d43d8ed45b13ccae92" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.298037 4940 scope.go:117] "RemoveContainer" containerID="4f4df3fa4522baf5a0e74fba1c40c93d23867333df4e484b7336fb7618419dc8" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.336904 4940 scope.go:117] "RemoveContainer" containerID="935a7a505dfe9d7724626c35bf0f3d5f01b1fcafcb203ec8ac32ce3cc29422db" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.401574 4940 scope.go:117] 
"RemoveContainer" containerID="e21b7573134555a09738502f85d5e3873c19ebc7d08f4d9759e38cf1a8aeb82e" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.442761 4940 scope.go:117] "RemoveContainer" containerID="342f7def9f50941425d743518c4769503c938bad72ec71dd786fa1971cffb42d" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.490748 4940 scope.go:117] "RemoveContainer" containerID="7584d944668278dbc303ed8ea0f9f93364b2d04f6bd4c7bd4b351eb7e68181a0" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.526939 4940 scope.go:117] "RemoveContainer" containerID="cd2764c789e4740aa378dbf6c1d22791d291e706850273c689c394120e943215" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.566756 4940 scope.go:117] "RemoveContainer" containerID="9531b6a359564e5acc36c1a844011c56b279a58cb4d140e8d4784d32d1a4405c" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.590663 4940 scope.go:117] "RemoveContainer" containerID="d259250918978cd7c3ae3722a903f4629c7b64cbf15a3fbf81f3518820b864db" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.621876 4940 scope.go:117] "RemoveContainer" containerID="5e9d238d69a51cb9a14dd2440ed7053be4d703055895ca45dcce44cdb54d3f79" Feb 23 09:20:34 crc kubenswrapper[4940]: I0223 09:20:34.658007 4940 scope.go:117] "RemoveContainer" containerID="4f47c5299d7075d882481ef592b61a33f78a5937be72d9b5e3d4726b2451cd6a" Feb 23 09:21:00 crc kubenswrapper[4940]: I0223 09:21:00.044914 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r5b46"] Feb 23 09:21:00 crc kubenswrapper[4940]: I0223 09:21:00.054249 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-r5b46"] Feb 23 09:21:01 crc kubenswrapper[4940]: I0223 09:21:01.362671 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3cfe7f-19c0-47e1-b535-0b4e98dba050" path="/var/lib/kubelet/pods/8b3cfe7f-19c0-47e1-b535-0b4e98dba050/volumes" Feb 23 09:21:12 crc kubenswrapper[4940]: I0223 
09:21:12.752062 4940 generic.go:334] "Generic (PLEG): container finished" podID="bf532816-d5b9-4205-844c-bf70b4cc5c18" containerID="14a50bca0167954ab94ff91006af7d6f0331c917567320f4c524a0351348d44f" exitCode=0 Feb 23 09:21:12 crc kubenswrapper[4940]: I0223 09:21:12.752182 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" event={"ID":"bf532816-d5b9-4205-844c-bf70b4cc5c18","Type":"ContainerDied","Data":"14a50bca0167954ab94ff91006af7d6f0331c917567320f4c524a0351348d44f"} Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.182635 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.291534 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6n6s\" (UniqueName: \"kubernetes.io/projected/bf532816-d5b9-4205-844c-bf70b4cc5c18-kube-api-access-x6n6s\") pod \"bf532816-d5b9-4205-844c-bf70b4cc5c18\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.291668 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-ssh-key-openstack-edpm-ipam\") pod \"bf532816-d5b9-4205-844c-bf70b4cc5c18\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.291885 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-inventory\") pod \"bf532816-d5b9-4205-844c-bf70b4cc5c18\" (UID: \"bf532816-d5b9-4205-844c-bf70b4cc5c18\") " Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.297925 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bf532816-d5b9-4205-844c-bf70b4cc5c18-kube-api-access-x6n6s" (OuterVolumeSpecName: "kube-api-access-x6n6s") pod "bf532816-d5b9-4205-844c-bf70b4cc5c18" (UID: "bf532816-d5b9-4205-844c-bf70b4cc5c18"). InnerVolumeSpecName "kube-api-access-x6n6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.328661 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf532816-d5b9-4205-844c-bf70b4cc5c18" (UID: "bf532816-d5b9-4205-844c-bf70b4cc5c18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.339470 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-inventory" (OuterVolumeSpecName: "inventory") pod "bf532816-d5b9-4205-844c-bf70b4cc5c18" (UID: "bf532816-d5b9-4205-844c-bf70b4cc5c18"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.395334 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.395395 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6n6s\" (UniqueName: \"kubernetes.io/projected/bf532816-d5b9-4205-844c-bf70b4cc5c18-kube-api-access-x6n6s\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.395410 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf532816-d5b9-4205-844c-bf70b4cc5c18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.796973 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" event={"ID":"bf532816-d5b9-4205-844c-bf70b4cc5c18","Type":"ContainerDied","Data":"f4ba0d53d0ee1461bcd3b9752a667551d14151886e63adff395b168fcb87a61b"} Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.797025 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ba0d53d0ee1461bcd3b9752a667551d14151886e63adff395b168fcb87a61b" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.797129 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.876339 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-w7thd"] Feb 23 09:21:14 crc kubenswrapper[4940]: E0223 09:21:14.876865 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf532816-d5b9-4205-844c-bf70b4cc5c18" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.876892 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf532816-d5b9-4205-844c-bf70b4cc5c18" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.877153 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf532816-d5b9-4205-844c-bf70b4cc5c18" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.878024 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.881114 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.881294 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.886352 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-w7thd"] Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.897500 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.897570 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.912771 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.912821 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:14 crc kubenswrapper[4940]: I0223 09:21:14.912881 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sv6xr\" (UniqueName: \"kubernetes.io/projected/96cf1ebf-387e-417f-83eb-a360f951217e-kube-api-access-sv6xr\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.014664 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.014712 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.014772 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv6xr\" (UniqueName: \"kubernetes.io/projected/96cf1ebf-387e-417f-83eb-a360f951217e-kube-api-access-sv6xr\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.018315 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.018317 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.033147 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv6xr\" (UniqueName: \"kubernetes.io/projected/96cf1ebf-387e-417f-83eb-a360f951217e-kube-api-access-sv6xr\") pod \"ssh-known-hosts-edpm-deployment-w7thd\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.212715 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.783298 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-w7thd"] Feb 23 09:21:15 crc kubenswrapper[4940]: I0223 09:21:15.821534 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" event={"ID":"96cf1ebf-387e-417f-83eb-a360f951217e","Type":"ContainerStarted","Data":"f6482bdd0ee148b10d288942ea181abde122fbb154327b150b43edfc1a48283b"} Feb 23 09:21:16 crc kubenswrapper[4940]: I0223 09:21:16.832123 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" event={"ID":"96cf1ebf-387e-417f-83eb-a360f951217e","Type":"ContainerStarted","Data":"1bf300a8a0e03234e1e1aec8ff16a8e32a17f05a6ea78f5948e00421f3cd9133"} Feb 23 09:21:23 crc kubenswrapper[4940]: I0223 09:21:23.893908 4940 generic.go:334] "Generic (PLEG): container finished" podID="96cf1ebf-387e-417f-83eb-a360f951217e" 
containerID="1bf300a8a0e03234e1e1aec8ff16a8e32a17f05a6ea78f5948e00421f3cd9133" exitCode=0 Feb 23 09:21:23 crc kubenswrapper[4940]: I0223 09:21:23.893983 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" event={"ID":"96cf1ebf-387e-417f-83eb-a360f951217e","Type":"ContainerDied","Data":"1bf300a8a0e03234e1e1aec8ff16a8e32a17f05a6ea78f5948e00421f3cd9133"} Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.320085 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.331648 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-inventory-0\") pod \"96cf1ebf-387e-417f-83eb-a360f951217e\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.331787 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-ssh-key-openstack-edpm-ipam\") pod \"96cf1ebf-387e-417f-83eb-a360f951217e\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.332703 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv6xr\" (UniqueName: \"kubernetes.io/projected/96cf1ebf-387e-417f-83eb-a360f951217e-kube-api-access-sv6xr\") pod \"96cf1ebf-387e-417f-83eb-a360f951217e\" (UID: \"96cf1ebf-387e-417f-83eb-a360f951217e\") " Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.344015 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96cf1ebf-387e-417f-83eb-a360f951217e-kube-api-access-sv6xr" (OuterVolumeSpecName: "kube-api-access-sv6xr") pod 
"96cf1ebf-387e-417f-83eb-a360f951217e" (UID: "96cf1ebf-387e-417f-83eb-a360f951217e"). InnerVolumeSpecName "kube-api-access-sv6xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.370893 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96cf1ebf-387e-417f-83eb-a360f951217e" (UID: "96cf1ebf-387e-417f-83eb-a360f951217e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.371061 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "96cf1ebf-387e-417f-83eb-a360f951217e" (UID: "96cf1ebf-387e-417f-83eb-a360f951217e"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.436907 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv6xr\" (UniqueName: \"kubernetes.io/projected/96cf1ebf-387e-417f-83eb-a360f951217e-kube-api-access-sv6xr\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.436947 4940 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.436956 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96cf1ebf-387e-417f-83eb-a360f951217e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.920743 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" event={"ID":"96cf1ebf-387e-417f-83eb-a360f951217e","Type":"ContainerDied","Data":"f6482bdd0ee148b10d288942ea181abde122fbb154327b150b43edfc1a48283b"} Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.920836 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-w7thd" Feb 23 09:21:25 crc kubenswrapper[4940]: I0223 09:21:25.920852 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6482bdd0ee148b10d288942ea181abde122fbb154327b150b43edfc1a48283b" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.034034 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc"] Feb 23 09:21:26 crc kubenswrapper[4940]: E0223 09:21:26.034483 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96cf1ebf-387e-417f-83eb-a360f951217e" containerName="ssh-known-hosts-edpm-deployment" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.034502 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="96cf1ebf-387e-417f-83eb-a360f951217e" containerName="ssh-known-hosts-edpm-deployment" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.034734 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cf1ebf-387e-417f-83eb-a360f951217e" containerName="ssh-known-hosts-edpm-deployment" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.035520 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.039420 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.039645 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.039813 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.040001 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.047235 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc"] Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.056481 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.056843 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.057173 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfkf\" (UniqueName: \"kubernetes.io/projected/a25d2721-a065-4c7a-9d4c-61c3be28422e-kube-api-access-psfkf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.159409 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psfkf\" (UniqueName: \"kubernetes.io/projected/a25d2721-a065-4c7a-9d4c-61c3be28422e-kube-api-access-psfkf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.159570 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.159748 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.163333 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: 
\"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.163349 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.179132 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psfkf\" (UniqueName: \"kubernetes.io/projected/a25d2721-a065-4c7a-9d4c-61c3be28422e-kube-api-access-psfkf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgnzc\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.351228 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.886883 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc"] Feb 23 09:21:26 crc kubenswrapper[4940]: I0223 09:21:26.932927 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" event={"ID":"a25d2721-a065-4c7a-9d4c-61c3be28422e","Type":"ContainerStarted","Data":"11b32c29a3284c4716a043278f48eeff33702c798f4473940b922a451b0aa85b"} Feb 23 09:21:27 crc kubenswrapper[4940]: I0223 09:21:27.942934 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" event={"ID":"a25d2721-a065-4c7a-9d4c-61c3be28422e","Type":"ContainerStarted","Data":"d92273b27e80bdf400fc41e157a58facb3c82e301cf89a484a88c592a2ea0749"} Feb 23 09:21:27 crc kubenswrapper[4940]: I0223 09:21:27.973082 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" podStartSLOduration=1.587957472 podStartE2EDuration="1.973062594s" podCreationTimestamp="2026-02-23 09:21:26 +0000 UTC" firstStartedPulling="2026-02-23 09:21:26.895081442 +0000 UTC m=+2018.278287599" lastFinishedPulling="2026-02-23 09:21:27.280186564 +0000 UTC m=+2018.663392721" observedRunningTime="2026-02-23 09:21:27.959812879 +0000 UTC m=+2019.343019046" watchObservedRunningTime="2026-02-23 09:21:27.973062594 +0000 UTC m=+2019.356268751" Feb 23 09:21:31 crc kubenswrapper[4940]: I0223 09:21:31.429297 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:21:31 crc kubenswrapper[4940]: I0223 
09:21:31.429853 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:21:34 crc kubenswrapper[4940]: I0223 09:21:34.915534 4940 scope.go:117] "RemoveContainer" containerID="94c228c755da60cf2c2d4aff4d92241a5462047210c1ba47c929e426e1101812" Feb 23 09:21:35 crc kubenswrapper[4940]: I0223 09:21:35.004524 4940 generic.go:334] "Generic (PLEG): container finished" podID="a25d2721-a065-4c7a-9d4c-61c3be28422e" containerID="d92273b27e80bdf400fc41e157a58facb3c82e301cf89a484a88c592a2ea0749" exitCode=0 Feb 23 09:21:35 crc kubenswrapper[4940]: I0223 09:21:35.004575 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" event={"ID":"a25d2721-a065-4c7a-9d4c-61c3be28422e","Type":"ContainerDied","Data":"d92273b27e80bdf400fc41e157a58facb3c82e301cf89a484a88c592a2ea0749"} Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.516252 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.602895 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psfkf\" (UniqueName: \"kubernetes.io/projected/a25d2721-a065-4c7a-9d4c-61c3be28422e-kube-api-access-psfkf\") pod \"a25d2721-a065-4c7a-9d4c-61c3be28422e\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.603241 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-ssh-key-openstack-edpm-ipam\") pod \"a25d2721-a065-4c7a-9d4c-61c3be28422e\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.603387 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-inventory\") pod \"a25d2721-a065-4c7a-9d4c-61c3be28422e\" (UID: \"a25d2721-a065-4c7a-9d4c-61c3be28422e\") " Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.614481 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25d2721-a065-4c7a-9d4c-61c3be28422e-kube-api-access-psfkf" (OuterVolumeSpecName: "kube-api-access-psfkf") pod "a25d2721-a065-4c7a-9d4c-61c3be28422e" (UID: "a25d2721-a065-4c7a-9d4c-61c3be28422e"). InnerVolumeSpecName "kube-api-access-psfkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.628384 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a25d2721-a065-4c7a-9d4c-61c3be28422e" (UID: "a25d2721-a065-4c7a-9d4c-61c3be28422e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.634440 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-inventory" (OuterVolumeSpecName: "inventory") pod "a25d2721-a065-4c7a-9d4c-61c3be28422e" (UID: "a25d2721-a065-4c7a-9d4c-61c3be28422e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.706143 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psfkf\" (UniqueName: \"kubernetes.io/projected/a25d2721-a065-4c7a-9d4c-61c3be28422e-kube-api-access-psfkf\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.706196 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:36 crc kubenswrapper[4940]: I0223 09:21:36.706211 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a25d2721-a065-4c7a-9d4c-61c3be28422e-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.029856 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" 
event={"ID":"a25d2721-a065-4c7a-9d4c-61c3be28422e","Type":"ContainerDied","Data":"11b32c29a3284c4716a043278f48eeff33702c798f4473940b922a451b0aa85b"} Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.029914 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b32c29a3284c4716a043278f48eeff33702c798f4473940b922a451b0aa85b" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.029954 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgnzc" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.121832 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8"] Feb 23 09:21:37 crc kubenswrapper[4940]: E0223 09:21:37.122255 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d2721-a065-4c7a-9d4c-61c3be28422e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.122273 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d2721-a065-4c7a-9d4c-61c3be28422e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.122491 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d2721-a065-4c7a-9d4c-61c3be28422e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.123203 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.127183 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.128199 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.129111 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.129991 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.144393 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8"] Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.216929 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.216991 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn22m\" (UniqueName: \"kubernetes.io/projected/863d4b0a-6bc6-44a6-89d0-9167411a397d-kube-api-access-bn22m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 
09:21:37.217053 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.319839 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.319946 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn22m\" (UniqueName: \"kubernetes.io/projected/863d4b0a-6bc6-44a6-89d0-9167411a397d-kube-api-access-bn22m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.320057 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.324962 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.327436 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.351936 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn22m\" (UniqueName: \"kubernetes.io/projected/863d4b0a-6bc6-44a6-89d0-9167411a397d-kube-api-access-bn22m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:37 crc kubenswrapper[4940]: I0223 09:21:37.457567 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:38 crc kubenswrapper[4940]: I0223 09:21:38.060013 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-vvclk"] Feb 23 09:21:38 crc kubenswrapper[4940]: I0223 09:21:38.070405 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-vvclk"] Feb 23 09:21:38 crc kubenswrapper[4940]: I0223 09:21:38.094604 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8"] Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.032097 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ctwwt"] Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.040637 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ctwwt"] Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.049339 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" event={"ID":"863d4b0a-6bc6-44a6-89d0-9167411a397d","Type":"ContainerStarted","Data":"c7058ab8f261fecb2e2f8aa3fdfad05be85baf163e01d467abf249f9cacd29f5"} Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.049409 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" event={"ID":"863d4b0a-6bc6-44a6-89d0-9167411a397d","Type":"ContainerStarted","Data":"285b321a77817c335aebf7877c694340ba4e11b70f66e091e7aec637180a5698"} Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.076379 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" podStartSLOduration=1.651130154 podStartE2EDuration="2.076354764s" podCreationTimestamp="2026-02-23 09:21:37 +0000 UTC" firstStartedPulling="2026-02-23 09:21:38.09754433 
+0000 UTC m=+2029.480750487" lastFinishedPulling="2026-02-23 09:21:38.52276894 +0000 UTC m=+2029.905975097" observedRunningTime="2026-02-23 09:21:39.066569956 +0000 UTC m=+2030.449776123" watchObservedRunningTime="2026-02-23 09:21:39.076354764 +0000 UTC m=+2030.459560961" Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.362278 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199cf4dd-ab6f-4d59-9a82-86c613352012" path="/var/lib/kubelet/pods/199cf4dd-ab6f-4d59-9a82-86c613352012/volumes" Feb 23 09:21:39 crc kubenswrapper[4940]: I0223 09:21:39.363576 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f88e18b-bcfa-4446-bbdf-8824c2c94f65" path="/var/lib/kubelet/pods/5f88e18b-bcfa-4446-bbdf-8824c2c94f65/volumes" Feb 23 09:21:47 crc kubenswrapper[4940]: I0223 09:21:47.130881 4940 generic.go:334] "Generic (PLEG): container finished" podID="863d4b0a-6bc6-44a6-89d0-9167411a397d" containerID="c7058ab8f261fecb2e2f8aa3fdfad05be85baf163e01d467abf249f9cacd29f5" exitCode=0 Feb 23 09:21:47 crc kubenswrapper[4940]: I0223 09:21:47.130974 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" event={"ID":"863d4b0a-6bc6-44a6-89d0-9167411a397d","Type":"ContainerDied","Data":"c7058ab8f261fecb2e2f8aa3fdfad05be85baf163e01d467abf249f9cacd29f5"} Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.563263 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.679352 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-inventory\") pod \"863d4b0a-6bc6-44a6-89d0-9167411a397d\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.679472 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-ssh-key-openstack-edpm-ipam\") pod \"863d4b0a-6bc6-44a6-89d0-9167411a397d\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.679551 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn22m\" (UniqueName: \"kubernetes.io/projected/863d4b0a-6bc6-44a6-89d0-9167411a397d-kube-api-access-bn22m\") pod \"863d4b0a-6bc6-44a6-89d0-9167411a397d\" (UID: \"863d4b0a-6bc6-44a6-89d0-9167411a397d\") " Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.685547 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863d4b0a-6bc6-44a6-89d0-9167411a397d-kube-api-access-bn22m" (OuterVolumeSpecName: "kube-api-access-bn22m") pod "863d4b0a-6bc6-44a6-89d0-9167411a397d" (UID: "863d4b0a-6bc6-44a6-89d0-9167411a397d"). InnerVolumeSpecName "kube-api-access-bn22m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.711347 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "863d4b0a-6bc6-44a6-89d0-9167411a397d" (UID: "863d4b0a-6bc6-44a6-89d0-9167411a397d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.716854 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-inventory" (OuterVolumeSpecName: "inventory") pod "863d4b0a-6bc6-44a6-89d0-9167411a397d" (UID: "863d4b0a-6bc6-44a6-89d0-9167411a397d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.781391 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.781430 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/863d4b0a-6bc6-44a6-89d0-9167411a397d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:48 crc kubenswrapper[4940]: I0223 09:21:48.781442 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn22m\" (UniqueName: \"kubernetes.io/projected/863d4b0a-6bc6-44a6-89d0-9167411a397d-kube-api-access-bn22m\") on node \"crc\" DevicePath \"\"" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.155258 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" 
event={"ID":"863d4b0a-6bc6-44a6-89d0-9167411a397d","Type":"ContainerDied","Data":"285b321a77817c335aebf7877c694340ba4e11b70f66e091e7aec637180a5698"} Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.155322 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285b321a77817c335aebf7877c694340ba4e11b70f66e091e7aec637180a5698" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.155338 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.256042 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62"] Feb 23 09:21:49 crc kubenswrapper[4940]: E0223 09:21:49.256471 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863d4b0a-6bc6-44a6-89d0-9167411a397d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.256492 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="863d4b0a-6bc6-44a6-89d0-9167411a397d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.256685 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="863d4b0a-6bc6-44a6-89d0-9167411a397d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.257479 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.260562 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.260675 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.260809 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.260838 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.260929 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.261080 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.261207 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.264334 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.267287 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62"] Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294241 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294280 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c928h\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-kube-api-access-c928h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294330 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294357 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294413 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294437 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294456 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294517 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294593 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294697 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294773 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294820 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294879 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.294920 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.397051 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.397445 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.397543 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.397573 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c928h\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-kube-api-access-c928h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.397669 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.397739 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398295 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398326 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398386 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398555 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398625 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: 
I0223 09:21:49.398657 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398907 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.398970 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.401541 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.403024 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.403566 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.403999 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.404371 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.405511 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.406536 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.406576 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.408743 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.409353 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.409691 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.409719 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.411234 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.417989 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c928h\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-kube-api-access-c928h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l9x62\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:49 crc kubenswrapper[4940]: I0223 09:21:49.574825 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:21:50 crc kubenswrapper[4940]: I0223 09:21:50.150797 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62"] Feb 23 09:21:50 crc kubenswrapper[4940]: I0223 09:21:50.168281 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" event={"ID":"b42a2d02-c866-40d6-93ce-81d71aaf7195","Type":"ContainerStarted","Data":"0588e65ccd41db577f2fb762bbf0631e551be91a58506f4f21f294cc3fd93018"} Feb 23 09:21:51 crc kubenswrapper[4940]: I0223 09:21:51.179056 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" event={"ID":"b42a2d02-c866-40d6-93ce-81d71aaf7195","Type":"ContainerStarted","Data":"3512d284a5d2234caffb981b56e71193ec6a95896a6b80c7f719a5692c815a11"} Feb 23 09:21:51 crc kubenswrapper[4940]: I0223 09:21:51.207907 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" podStartSLOduration=1.7812084000000001 podStartE2EDuration="2.207882736s" podCreationTimestamp="2026-02-23 09:21:49 +0000 UTC" firstStartedPulling="2026-02-23 09:21:50.149879479 +0000 UTC m=+2041.533085646" lastFinishedPulling="2026-02-23 09:21:50.576553825 +0000 UTC m=+2041.959759982" observedRunningTime="2026-02-23 09:21:51.201107353 +0000 UTC m=+2042.584313550" watchObservedRunningTime="2026-02-23 09:21:51.207882736 +0000 UTC m=+2042.591088893" Feb 23 09:22:01 crc kubenswrapper[4940]: I0223 09:22:01.429901 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:22:01 crc kubenswrapper[4940]: I0223 09:22:01.430335 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:22:23 crc kubenswrapper[4940]: I0223 09:22:23.039197 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2nkmb"] Feb 23 09:22:23 crc kubenswrapper[4940]: I0223 09:22:23.047985 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2nkmb"] Feb 23 09:22:23 crc kubenswrapper[4940]: I0223 09:22:23.354973 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79be8ad1-5c0e-41a0-b293-46a293c25212" path="/var/lib/kubelet/pods/79be8ad1-5c0e-41a0-b293-46a293c25212/volumes" Feb 23 09:22:24 crc kubenswrapper[4940]: I0223 09:22:24.472837 4940 generic.go:334] "Generic (PLEG): container finished" podID="b42a2d02-c866-40d6-93ce-81d71aaf7195" containerID="3512d284a5d2234caffb981b56e71193ec6a95896a6b80c7f719a5692c815a11" exitCode=0 Feb 23 09:22:24 crc kubenswrapper[4940]: I0223 09:22:24.472951 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" event={"ID":"b42a2d02-c866-40d6-93ce-81d71aaf7195","Type":"ContainerDied","Data":"3512d284a5d2234caffb981b56e71193ec6a95896a6b80c7f719a5692c815a11"} Feb 23 09:22:25 crc kubenswrapper[4940]: I0223 09:22:25.889714 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.003421 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-telemetry-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.003483 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-nova-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.003510 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-neutron-metadata-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.003541 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-repo-setup-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.003571 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-libvirt-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 
23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004493 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-bootstrap-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004563 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ovn-combined-ca-bundle\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004649 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-inventory\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004674 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004699 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ssh-key-openstack-edpm-ipam\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004771 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004838 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c928h\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-kube-api-access-c928h\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004882 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.004918 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"b42a2d02-c866-40d6-93ce-81d71aaf7195\" (UID: \"b42a2d02-c866-40d6-93ce-81d71aaf7195\") " Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.011410 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.011437 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.011458 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.011512 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-kube-api-access-c928h" (OuterVolumeSpecName: "kube-api-access-c928h") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "kube-api-access-c928h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.011579 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.012516 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.014969 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.015448 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.016396 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.018751 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.019442 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.024717 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.040650 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.043414 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-inventory" (OuterVolumeSpecName: "inventory") pod "b42a2d02-c866-40d6-93ce-81d71aaf7195" (UID: "b42a2d02-c866-40d6-93ce-81d71aaf7195"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.106955 4940 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.106989 4940 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107013 4940 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107022 4940 reconciler_common.go:293] "Volume detached for 
volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107031 4940 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107039 4940 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107048 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107056 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107065 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107073 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc 
kubenswrapper[4940]: I0223 09:22:26.107082 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c928h\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-kube-api-access-c928h\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107092 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107100 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b42a2d02-c866-40d6-93ce-81d71aaf7195-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.107109 4940 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b42a2d02-c866-40d6-93ce-81d71aaf7195-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.498690 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" event={"ID":"b42a2d02-c866-40d6-93ce-81d71aaf7195","Type":"ContainerDied","Data":"0588e65ccd41db577f2fb762bbf0631e551be91a58506f4f21f294cc3fd93018"} Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.498751 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0588e65ccd41db577f2fb762bbf0631e551be91a58506f4f21f294cc3fd93018" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.498753 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l9x62" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.604730 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk"] Feb 23 09:22:26 crc kubenswrapper[4940]: E0223 09:22:26.605191 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b42a2d02-c866-40d6-93ce-81d71aaf7195" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.605213 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42a2d02-c866-40d6-93ce-81d71aaf7195" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.605428 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="b42a2d02-c866-40d6-93ce-81d71aaf7195" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.606136 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.609250 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.609489 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.609567 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.609753 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.609914 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.628374 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk"] Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.720731 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.720858 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgdg9\" (UniqueName: \"kubernetes.io/projected/d252356a-80f4-4cf3-b739-520d9bd4b2c1-kube-api-access-kgdg9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: 
\"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.720895 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.720928 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.720955 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.823009 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.823141 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kgdg9\" (UniqueName: \"kubernetes.io/projected/d252356a-80f4-4cf3-b739-520d9bd4b2c1-kube-api-access-kgdg9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.823514 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.823984 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.824805 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.824925 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: 
\"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.827509 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.828364 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.828830 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.841710 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgdg9\" (UniqueName: \"kubernetes.io/projected/d252356a-80f4-4cf3-b739-520d9bd4b2c1-kube-api-access-kgdg9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ltqdk\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:26 crc kubenswrapper[4940]: I0223 09:22:26.948330 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:22:27 crc kubenswrapper[4940]: I0223 09:22:27.510204 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk"] Feb 23 09:22:28 crc kubenswrapper[4940]: I0223 09:22:28.526262 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" event={"ID":"d252356a-80f4-4cf3-b739-520d9bd4b2c1","Type":"ContainerStarted","Data":"a677f6195f8fdeb41b0815adbca73c0e312f3c5d872657b9b92328e612553c4f"} Feb 23 09:22:28 crc kubenswrapper[4940]: I0223 09:22:28.527591 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" event={"ID":"d252356a-80f4-4cf3-b739-520d9bd4b2c1","Type":"ContainerStarted","Data":"97b18fd35b44252603bc7096123712146feb6d069e06b27d8a8b5c442ce6d6b6"} Feb 23 09:22:28 crc kubenswrapper[4940]: I0223 09:22:28.564021 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" podStartSLOduration=2.148985334 podStartE2EDuration="2.563992183s" podCreationTimestamp="2026-02-23 09:22:26 +0000 UTC" firstStartedPulling="2026-02-23 09:22:27.512401308 +0000 UTC m=+2078.895607465" lastFinishedPulling="2026-02-23 09:22:27.927408157 +0000 UTC m=+2079.310614314" observedRunningTime="2026-02-23 09:22:28.547205847 +0000 UTC m=+2079.930412014" watchObservedRunningTime="2026-02-23 09:22:28.563992183 +0000 UTC m=+2079.947198360" Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.429972 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.430483 4940 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.430536 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.431565 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e22f4141042ccdf07818d61e1f9266aa39799c5d25cbea563603ab4fa191be6"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.431661 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://0e22f4141042ccdf07818d61e1f9266aa39799c5d25cbea563603ab4fa191be6" gracePeriod=600 Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.678514 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="0e22f4141042ccdf07818d61e1f9266aa39799c5d25cbea563603ab4fa191be6" exitCode=0 Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 09:22:31.678556 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"0e22f4141042ccdf07818d61e1f9266aa39799c5d25cbea563603ab4fa191be6"} Feb 23 09:22:31 crc kubenswrapper[4940]: I0223 
09:22:31.678588 4940 scope.go:117] "RemoveContainer" containerID="19d6b225c575372b5c150867fa5fdc624fa3eadb3fc5651545d3dd6885ec731e" Feb 23 09:22:32 crc kubenswrapper[4940]: I0223 09:22:32.689870 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8"} Feb 23 09:22:34 crc kubenswrapper[4940]: I0223 09:22:34.995716 4940 scope.go:117] "RemoveContainer" containerID="411c1a85a40487c82b15b811ee4b3e06cf9eadd0da0077b096e2a940b32afbba" Feb 23 09:22:35 crc kubenswrapper[4940]: I0223 09:22:35.031752 4940 scope.go:117] "RemoveContainer" containerID="879d09cd05fdd673a469aaef8f885d26bd2f4d676ecd9134c6269c67a9b0c59e" Feb 23 09:22:35 crc kubenswrapper[4940]: I0223 09:22:35.099460 4940 scope.go:117] "RemoveContainer" containerID="e46cc3d8abf9da36dc6d70b4803b2e8c3cb35392fdd83d5a8598848f99040823" Feb 23 09:22:57 crc kubenswrapper[4940]: I0223 09:22:57.825580 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrzvz"] Feb 23 09:22:57 crc kubenswrapper[4940]: I0223 09:22:57.835287 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:57 crc kubenswrapper[4940]: I0223 09:22:57.851472 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrzvz"] Feb 23 09:22:57 crc kubenswrapper[4940]: I0223 09:22:57.935174 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkhz\" (UniqueName: \"kubernetes.io/projected/347c982c-1253-4120-b37b-2850671ab3e1-kube-api-access-gzkhz\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:57 crc kubenswrapper[4940]: I0223 09:22:57.935284 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-utilities\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:57 crc kubenswrapper[4940]: I0223 09:22:57.935457 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-catalog-content\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.038602 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkhz\" (UniqueName: \"kubernetes.io/projected/347c982c-1253-4120-b37b-2850671ab3e1-kube-api-access-gzkhz\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.038720 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-utilities\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.038769 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-catalog-content\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.039217 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-utilities\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.039370 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-catalog-content\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.057145 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkhz\" (UniqueName: \"kubernetes.io/projected/347c982c-1253-4120-b37b-2850671ab3e1-kube-api-access-gzkhz\") pod \"redhat-operators-qrzvz\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.172703 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.666090 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrzvz"] Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.947974 4940 generic.go:334] "Generic (PLEG): container finished" podID="347c982c-1253-4120-b37b-2850671ab3e1" containerID="e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15" exitCode=0 Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.948270 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerDied","Data":"e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15"} Feb 23 09:22:58 crc kubenswrapper[4940]: I0223 09:22:58.948552 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerStarted","Data":"17d85cf83c5d4fb2a532e142fdb8fefe0dc44225dedd2a90a0979903569dd6fd"} Feb 23 09:22:59 crc kubenswrapper[4940]: I0223 09:22:59.960840 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerStarted","Data":"c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446"} Feb 23 09:23:02 crc kubenswrapper[4940]: I0223 09:23:02.993288 4940 generic.go:334] "Generic (PLEG): container finished" podID="347c982c-1253-4120-b37b-2850671ab3e1" containerID="c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446" exitCode=0 Feb 23 09:23:02 crc kubenswrapper[4940]: I0223 09:23:02.993381 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" 
event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerDied","Data":"c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446"} Feb 23 09:23:04 crc kubenswrapper[4940]: I0223 09:23:04.005011 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerStarted","Data":"cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c"} Feb 23 09:23:04 crc kubenswrapper[4940]: I0223 09:23:04.036689 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrzvz" podStartSLOduration=2.604527612 podStartE2EDuration="7.036665449s" podCreationTimestamp="2026-02-23 09:22:57 +0000 UTC" firstStartedPulling="2026-02-23 09:22:58.951045428 +0000 UTC m=+2110.334251585" lastFinishedPulling="2026-02-23 09:23:03.383183265 +0000 UTC m=+2114.766389422" observedRunningTime="2026-02-23 09:23:04.030118405 +0000 UTC m=+2115.413324602" watchObservedRunningTime="2026-02-23 09:23:04.036665449 +0000 UTC m=+2115.419871616" Feb 23 09:23:08 crc kubenswrapper[4940]: I0223 09:23:08.173326 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:23:08 crc kubenswrapper[4940]: I0223 09:23:08.173803 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:23:09 crc kubenswrapper[4940]: I0223 09:23:09.247051 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrzvz" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="registry-server" probeResult="failure" output=< Feb 23 09:23:09 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:23:09 crc kubenswrapper[4940]: > Feb 23 09:23:18 crc kubenswrapper[4940]: I0223 09:23:18.221653 4940 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:23:18 crc kubenswrapper[4940]: I0223 09:23:18.280579 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:23:18 crc kubenswrapper[4940]: I0223 09:23:18.472830 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrzvz"] Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.149276 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrzvz" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="registry-server" containerID="cri-o://cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c" gracePeriod=2 Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.650086 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.803152 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzkhz\" (UniqueName: \"kubernetes.io/projected/347c982c-1253-4120-b37b-2850671ab3e1-kube-api-access-gzkhz\") pod \"347c982c-1253-4120-b37b-2850671ab3e1\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.803529 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-utilities\") pod \"347c982c-1253-4120-b37b-2850671ab3e1\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.803555 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-catalog-content\") pod 
\"347c982c-1253-4120-b37b-2850671ab3e1\" (UID: \"347c982c-1253-4120-b37b-2850671ab3e1\") " Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.804552 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-utilities" (OuterVolumeSpecName: "utilities") pod "347c982c-1253-4120-b37b-2850671ab3e1" (UID: "347c982c-1253-4120-b37b-2850671ab3e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.811089 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347c982c-1253-4120-b37b-2850671ab3e1-kube-api-access-gzkhz" (OuterVolumeSpecName: "kube-api-access-gzkhz") pod "347c982c-1253-4120-b37b-2850671ab3e1" (UID: "347c982c-1253-4120-b37b-2850671ab3e1"). InnerVolumeSpecName "kube-api-access-gzkhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.906428 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzkhz\" (UniqueName: \"kubernetes.io/projected/347c982c-1253-4120-b37b-2850671ab3e1-kube-api-access-gzkhz\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.906463 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:20 crc kubenswrapper[4940]: I0223 09:23:20.928064 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "347c982c-1253-4120-b37b-2850671ab3e1" (UID: "347c982c-1253-4120-b37b-2850671ab3e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.008510 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/347c982c-1253-4120-b37b-2850671ab3e1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.160987 4940 generic.go:334] "Generic (PLEG): container finished" podID="347c982c-1253-4120-b37b-2850671ab3e1" containerID="cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c" exitCode=0 Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.161047 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerDied","Data":"cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c"} Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.161088 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrzvz" event={"ID":"347c982c-1253-4120-b37b-2850671ab3e1","Type":"ContainerDied","Data":"17d85cf83c5d4fb2a532e142fdb8fefe0dc44225dedd2a90a0979903569dd6fd"} Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.161089 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrzvz" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.161109 4940 scope.go:117] "RemoveContainer" containerID="cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.188295 4940 scope.go:117] "RemoveContainer" containerID="c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.196833 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrzvz"] Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.209267 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrzvz"] Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.221657 4940 scope.go:117] "RemoveContainer" containerID="e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.264358 4940 scope.go:117] "RemoveContainer" containerID="cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c" Feb 23 09:23:21 crc kubenswrapper[4940]: E0223 09:23:21.265070 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c\": container with ID starting with cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c not found: ID does not exist" containerID="cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.265187 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c"} err="failed to get container status \"cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c\": rpc error: code = NotFound desc = could not find container 
\"cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c\": container with ID starting with cf2719c7dbf47a59a2005e1a90c3166df4e21f2e3a8c0139af291a9dfdc0858c not found: ID does not exist" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.265273 4940 scope.go:117] "RemoveContainer" containerID="c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446" Feb 23 09:23:21 crc kubenswrapper[4940]: E0223 09:23:21.265674 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446\": container with ID starting with c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446 not found: ID does not exist" containerID="c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.265758 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446"} err="failed to get container status \"c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446\": rpc error: code = NotFound desc = could not find container \"c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446\": container with ID starting with c6d180260b24db5823462f1b9cfbeb00bf5e554d61e0dd55b68b0385fc9f0446 not found: ID does not exist" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.265842 4940 scope.go:117] "RemoveContainer" containerID="e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15" Feb 23 09:23:21 crc kubenswrapper[4940]: E0223 09:23:21.266583 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15\": container with ID starting with e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15 not found: ID does not exist" 
containerID="e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.266689 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15"} err="failed to get container status \"e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15\": rpc error: code = NotFound desc = could not find container \"e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15\": container with ID starting with e7eeeb1019938b5a85d0f788f3a7bff5ac03c052342d4f68e216ee5bb0f8fd15 not found: ID does not exist" Feb 23 09:23:21 crc kubenswrapper[4940]: I0223 09:23:21.357946 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347c982c-1253-4120-b37b-2850671ab3e1" path="/var/lib/kubelet/pods/347c982c-1253-4120-b37b-2850671ab3e1/volumes" Feb 23 09:23:25 crc kubenswrapper[4940]: I0223 09:23:25.194014 4940 generic.go:334] "Generic (PLEG): container finished" podID="d252356a-80f4-4cf3-b739-520d9bd4b2c1" containerID="a677f6195f8fdeb41b0815adbca73c0e312f3c5d872657b9b92328e612553c4f" exitCode=0 Feb 23 09:23:25 crc kubenswrapper[4940]: I0223 09:23:25.194141 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" event={"ID":"d252356a-80f4-4cf3-b739-520d9bd4b2c1","Type":"ContainerDied","Data":"a677f6195f8fdeb41b0815adbca73c0e312f3c5d872657b9b92328e612553c4f"} Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.622999 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.728755 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgdg9\" (UniqueName: \"kubernetes.io/projected/d252356a-80f4-4cf3-b739-520d9bd4b2c1-kube-api-access-kgdg9\") pod \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.728824 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ssh-key-openstack-edpm-ipam\") pod \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.728870 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovn-combined-ca-bundle\") pod \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.729048 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-inventory\") pod \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.729093 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovncontroller-config-0\") pod \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\" (UID: \"d252356a-80f4-4cf3-b739-520d9bd4b2c1\") " Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.734191 4940 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d252356a-80f4-4cf3-b739-520d9bd4b2c1-kube-api-access-kgdg9" (OuterVolumeSpecName: "kube-api-access-kgdg9") pod "d252356a-80f4-4cf3-b739-520d9bd4b2c1" (UID: "d252356a-80f4-4cf3-b739-520d9bd4b2c1"). InnerVolumeSpecName "kube-api-access-kgdg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.734240 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d252356a-80f4-4cf3-b739-520d9bd4b2c1" (UID: "d252356a-80f4-4cf3-b739-520d9bd4b2c1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.762979 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d252356a-80f4-4cf3-b739-520d9bd4b2c1" (UID: "d252356a-80f4-4cf3-b739-520d9bd4b2c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.776238 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-inventory" (OuterVolumeSpecName: "inventory") pod "d252356a-80f4-4cf3-b739-520d9bd4b2c1" (UID: "d252356a-80f4-4cf3-b739-520d9bd4b2c1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.788371 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d252356a-80f4-4cf3-b739-520d9bd4b2c1" (UID: "d252356a-80f4-4cf3-b739-520d9bd4b2c1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.832164 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.832225 4940 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.832247 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d252356a-80f4-4cf3-b739-520d9bd4b2c1-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.832266 4940 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d252356a-80f4-4cf3-b739-520d9bd4b2c1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:26 crc kubenswrapper[4940]: I0223 09:23:26.832284 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgdg9\" (UniqueName: \"kubernetes.io/projected/d252356a-80f4-4cf3-b739-520d9bd4b2c1-kube-api-access-kgdg9\") on node \"crc\" DevicePath \"\"" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.221430 4940 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" event={"ID":"d252356a-80f4-4cf3-b739-520d9bd4b2c1","Type":"ContainerDied","Data":"97b18fd35b44252603bc7096123712146feb6d069e06b27d8a8b5c442ce6d6b6"} Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.221478 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b18fd35b44252603bc7096123712146feb6d069e06b27d8a8b5c442ce6d6b6" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.221572 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ltqdk" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.332260 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4"] Feb 23 09:23:27 crc kubenswrapper[4940]: E0223 09:23:27.333120 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="extract-content" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.333147 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="extract-content" Feb 23 09:23:27 crc kubenswrapper[4940]: E0223 09:23:27.333161 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="extract-utilities" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.333169 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="extract-utilities" Feb 23 09:23:27 crc kubenswrapper[4940]: E0223 09:23:27.333188 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="registry-server" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.333199 4940 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="registry-server" Feb 23 09:23:27 crc kubenswrapper[4940]: E0223 09:23:27.333228 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d252356a-80f4-4cf3-b739-520d9bd4b2c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.333240 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d252356a-80f4-4cf3-b739-520d9bd4b2c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.333522 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="347c982c-1253-4120-b37b-2850671ab3e1" containerName="registry-server" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.333550 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d252356a-80f4-4cf3-b739-520d9bd4b2c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.334438 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.338088 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.338124 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.338486 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.338905 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.339133 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.340235 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343242 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4"] Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343257 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343513 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343557 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343593 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gj25\" (UniqueName: \"kubernetes.io/projected/dea28292-7367-4777-9e99-80da3a9c51cf-kube-api-access-4gj25\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343666 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.343718 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.444774 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.444929 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.444952 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.444972 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gj25\" (UniqueName: 
\"kubernetes.io/projected/dea28292-7367-4777-9e99-80da3a9c51cf-kube-api-access-4gj25\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.445004 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.445032 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.449742 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.450045 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.450352 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.450703 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.459239 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.466556 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gj25\" (UniqueName: \"kubernetes.io/projected/dea28292-7367-4777-9e99-80da3a9c51cf-kube-api-access-4gj25\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:27 crc kubenswrapper[4940]: I0223 09:23:27.661832 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:23:28 crc kubenswrapper[4940]: I0223 09:23:28.224491 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4"] Feb 23 09:23:28 crc kubenswrapper[4940]: I0223 09:23:28.245348 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" event={"ID":"dea28292-7367-4777-9e99-80da3a9c51cf","Type":"ContainerStarted","Data":"6ef0ec3434c86d6a602a071f2863e8a9400604abecc0eac13fc3d478f273b2a1"} Feb 23 09:23:29 crc kubenswrapper[4940]: I0223 09:23:29.255660 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" event={"ID":"dea28292-7367-4777-9e99-80da3a9c51cf","Type":"ContainerStarted","Data":"fabedf19d623f7df2a5418b3ddb04371a9b3342219b5a806a9d1c1613a8ee10c"} Feb 23 09:23:29 crc kubenswrapper[4940]: I0223 09:23:29.280747 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" podStartSLOduration=1.789292923 podStartE2EDuration="2.280728965s" podCreationTimestamp="2026-02-23 09:23:27 +0000 UTC" firstStartedPulling="2026-02-23 09:23:28.227786955 +0000 UTC m=+2139.610993122" lastFinishedPulling="2026-02-23 09:23:28.719222987 +0000 UTC m=+2140.102429164" observedRunningTime="2026-02-23 09:23:29.272749005 +0000 UTC m=+2140.655955192" watchObservedRunningTime="2026-02-23 09:23:29.280728965 +0000 UTC m=+2140.663935112" Feb 23 09:24:11 crc kubenswrapper[4940]: I0223 
09:24:11.602456 4940 generic.go:334] "Generic (PLEG): container finished" podID="dea28292-7367-4777-9e99-80da3a9c51cf" containerID="fabedf19d623f7df2a5418b3ddb04371a9b3342219b5a806a9d1c1613a8ee10c" exitCode=0 Feb 23 09:24:11 crc kubenswrapper[4940]: I0223 09:24:11.602555 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" event={"ID":"dea28292-7367-4777-9e99-80da3a9c51cf","Type":"ContainerDied","Data":"fabedf19d623f7df2a5418b3ddb04371a9b3342219b5a806a9d1c1613a8ee10c"} Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.114455 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.214536 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-metadata-combined-ca-bundle\") pod \"dea28292-7367-4777-9e99-80da3a9c51cf\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.214635 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-inventory\") pod \"dea28292-7367-4777-9e99-80da3a9c51cf\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.214748 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-ovn-metadata-agent-neutron-config-0\") pod \"dea28292-7367-4777-9e99-80da3a9c51cf\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.214821 4940 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-nova-metadata-neutron-config-0\") pod \"dea28292-7367-4777-9e99-80da3a9c51cf\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.214867 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gj25\" (UniqueName: \"kubernetes.io/projected/dea28292-7367-4777-9e99-80da3a9c51cf-kube-api-access-4gj25\") pod \"dea28292-7367-4777-9e99-80da3a9c51cf\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.214987 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-ssh-key-openstack-edpm-ipam\") pod \"dea28292-7367-4777-9e99-80da3a9c51cf\" (UID: \"dea28292-7367-4777-9e99-80da3a9c51cf\") " Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.234353 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dea28292-7367-4777-9e99-80da3a9c51cf" (UID: "dea28292-7367-4777-9e99-80da3a9c51cf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.234434 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea28292-7367-4777-9e99-80da3a9c51cf-kube-api-access-4gj25" (OuterVolumeSpecName: "kube-api-access-4gj25") pod "dea28292-7367-4777-9e99-80da3a9c51cf" (UID: "dea28292-7367-4777-9e99-80da3a9c51cf"). InnerVolumeSpecName "kube-api-access-4gj25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.246497 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "dea28292-7367-4777-9e99-80da3a9c51cf" (UID: "dea28292-7367-4777-9e99-80da3a9c51cf"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.258683 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-inventory" (OuterVolumeSpecName: "inventory") pod "dea28292-7367-4777-9e99-80da3a9c51cf" (UID: "dea28292-7367-4777-9e99-80da3a9c51cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.261936 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "dea28292-7367-4777-9e99-80da3a9c51cf" (UID: "dea28292-7367-4777-9e99-80da3a9c51cf"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.265796 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dea28292-7367-4777-9e99-80da3a9c51cf" (UID: "dea28292-7367-4777-9e99-80da3a9c51cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.317818 4940 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.317850 4940 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.317861 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gj25\" (UniqueName: \"kubernetes.io/projected/dea28292-7367-4777-9e99-80da3a9c51cf-kube-api-access-4gj25\") on node \"crc\" DevicePath \"\"" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.317869 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.317880 4940 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.317890 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea28292-7367-4777-9e99-80da3a9c51cf-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.625239 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" event={"ID":"dea28292-7367-4777-9e99-80da3a9c51cf","Type":"ContainerDied","Data":"6ef0ec3434c86d6a602a071f2863e8a9400604abecc0eac13fc3d478f273b2a1"} Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.625290 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef0ec3434c86d6a602a071f2863e8a9400604abecc0eac13fc3d478f273b2a1" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.625289 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.728241 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl"] Feb 23 09:24:13 crc kubenswrapper[4940]: E0223 09:24:13.729051 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea28292-7367-4777-9e99-80da3a9c51cf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.729202 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea28292-7367-4777-9e99-80da3a9c51cf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.729576 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea28292-7367-4777-9e99-80da3a9c51cf" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.732191 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.737472 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl"] Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.738074 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.738636 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.738820 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.738979 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.751012 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.831934 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.832002 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwlf\" (UniqueName: \"kubernetes.io/projected/7376823f-eb39-4631-9cac-0d4b297a9580-kube-api-access-txwlf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.832033 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.832307 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.832445 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.934757 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.934815 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-txwlf\" (UniqueName: \"kubernetes.io/projected/7376823f-eb39-4631-9cac-0d4b297a9580-kube-api-access-txwlf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.934841 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.934975 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.935029 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.940066 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: 
\"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.940749 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.940765 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.941025 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:13 crc kubenswrapper[4940]: I0223 09:24:13.966699 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txwlf\" (UniqueName: \"kubernetes.io/projected/7376823f-eb39-4631-9cac-0d4b297a9580-kube-api-access-txwlf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:14 crc kubenswrapper[4940]: I0223 09:24:14.061864 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:24:14 crc kubenswrapper[4940]: I0223 09:24:14.515531 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl"] Feb 23 09:24:14 crc kubenswrapper[4940]: I0223 09:24:14.640683 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" event={"ID":"7376823f-eb39-4631-9cac-0d4b297a9580","Type":"ContainerStarted","Data":"3e06008587d379a381db539875d93868cba8e66561ea10391670248cfdb95fde"} Feb 23 09:24:15 crc kubenswrapper[4940]: I0223 09:24:15.656502 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" event={"ID":"7376823f-eb39-4631-9cac-0d4b297a9580","Type":"ContainerStarted","Data":"f3fd538bd5bd089f70155a60a46a7786f902ca013793534ad92a33ad428df83e"} Feb 23 09:24:15 crc kubenswrapper[4940]: I0223 09:24:15.685440 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" podStartSLOduration=2.269094957 podStartE2EDuration="2.685418885s" podCreationTimestamp="2026-02-23 09:24:13 +0000 UTC" firstStartedPulling="2026-02-23 09:24:14.521545019 +0000 UTC m=+2185.904751186" lastFinishedPulling="2026-02-23 09:24:14.937868957 +0000 UTC m=+2186.321075114" observedRunningTime="2026-02-23 09:24:15.677735085 +0000 UTC m=+2187.060941242" watchObservedRunningTime="2026-02-23 09:24:15.685418885 +0000 UTC m=+2187.068625042" Feb 23 09:24:31 crc kubenswrapper[4940]: I0223 09:24:31.430403 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:24:31 crc kubenswrapper[4940]: I0223 
09:24:31.431021 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:25:01 crc kubenswrapper[4940]: I0223 09:25:01.429321 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:25:01 crc kubenswrapper[4940]: I0223 09:25:01.429850 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:25:31 crc kubenswrapper[4940]: I0223 09:25:31.429817 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:25:31 crc kubenswrapper[4940]: I0223 09:25:31.430359 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:25:31 crc kubenswrapper[4940]: I0223 09:25:31.430412 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:25:31 crc kubenswrapper[4940]: I0223 09:25:31.431271 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:25:31 crc kubenswrapper[4940]: I0223 09:25:31.431315 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" gracePeriod=600 Feb 23 09:25:31 crc kubenswrapper[4940]: E0223 09:25:31.558867 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:25:32 crc kubenswrapper[4940]: I0223 09:25:32.338674 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" exitCode=0 Feb 23 09:25:32 crc kubenswrapper[4940]: I0223 09:25:32.338712 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8"} Feb 23 09:25:32 crc 
kubenswrapper[4940]: I0223 09:25:32.339156 4940 scope.go:117] "RemoveContainer" containerID="0e22f4141042ccdf07818d61e1f9266aa39799c5d25cbea563603ab4fa191be6" Feb 23 09:25:32 crc kubenswrapper[4940]: I0223 09:25:32.341142 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:25:32 crc kubenswrapper[4940]: E0223 09:25:32.341999 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:25:44 crc kubenswrapper[4940]: I0223 09:25:44.346358 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:25:44 crc kubenswrapper[4940]: E0223 09:25:44.347251 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:25:48 crc kubenswrapper[4940]: I0223 09:25:48.933547 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mm58q"] Feb 23 09:25:48 crc kubenswrapper[4940]: I0223 09:25:48.936527 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:48 crc kubenswrapper[4940]: I0223 09:25:48.953794 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm58q"] Feb 23 09:25:48 crc kubenswrapper[4940]: I0223 09:25:48.964562 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd66\" (UniqueName: \"kubernetes.io/projected/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-kube-api-access-kbd66\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:48 crc kubenswrapper[4940]: I0223 09:25:48.964699 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-utilities\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:48 crc kubenswrapper[4940]: I0223 09:25:48.965059 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-catalog-content\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.067142 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbd66\" (UniqueName: \"kubernetes.io/projected/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-kube-api-access-kbd66\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.067218 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-utilities\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.067300 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-catalog-content\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.067768 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-catalog-content\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.068096 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-utilities\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.089440 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbd66\" (UniqueName: \"kubernetes.io/projected/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-kube-api-access-kbd66\") pod \"certified-operators-mm58q\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.255423 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:49 crc kubenswrapper[4940]: I0223 09:25:49.838074 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm58q"] Feb 23 09:25:50 crc kubenswrapper[4940]: I0223 09:25:50.504346 4940 generic.go:334] "Generic (PLEG): container finished" podID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerID="2b00dbb41d5dd5a6c3b4414e5fbc806870167bbf088722ee37ef4fda5b793a5e" exitCode=0 Feb 23 09:25:50 crc kubenswrapper[4940]: I0223 09:25:50.504401 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm58q" event={"ID":"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f","Type":"ContainerDied","Data":"2b00dbb41d5dd5a6c3b4414e5fbc806870167bbf088722ee37ef4fda5b793a5e"} Feb 23 09:25:50 crc kubenswrapper[4940]: I0223 09:25:50.504572 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm58q" event={"ID":"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f","Type":"ContainerStarted","Data":"d69a7465f351a425e0290066e73f999091d1a267150734ddcc8e6a9734f2e0b2"} Feb 23 09:25:50 crc kubenswrapper[4940]: I0223 09:25:50.508227 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:25:51 crc kubenswrapper[4940]: I0223 09:25:51.925796 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6wmjl"] Feb 23 09:25:51 crc kubenswrapper[4940]: I0223 09:25:51.930158 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:51 crc kubenswrapper[4940]: I0223 09:25:51.940696 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wmjl"] Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.039138 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4gq\" (UniqueName: \"kubernetes.io/projected/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-kube-api-access-nr4gq\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.039250 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-utilities\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.039342 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-catalog-content\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.141410 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4gq\" (UniqueName: \"kubernetes.io/projected/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-kube-api-access-nr4gq\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.141509 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-utilities\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.141576 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-catalog-content\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.142061 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-catalog-content\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.142103 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-utilities\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.161288 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr4gq\" (UniqueName: \"kubernetes.io/projected/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-kube-api-access-nr4gq\") pod \"redhat-marketplace-6wmjl\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.290105 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.529132 4940 generic.go:334] "Generic (PLEG): container finished" podID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerID="d936bccb181d28e579f4ddbb734a8cb73dd3a6f095b85b2ef8f9e3632aeae332" exitCode=0 Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.529437 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm58q" event={"ID":"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f","Type":"ContainerDied","Data":"d936bccb181d28e579f4ddbb734a8cb73dd3a6f095b85b2ef8f9e3632aeae332"} Feb 23 09:25:52 crc kubenswrapper[4940]: W0223 09:25:52.779604 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a702910_ccc7_4f1e_a90e_b4dc27c4881a.slice/crio-4abdef9dc6922daf012cf6537e9c069690967ed3e5b1604d0d7e601245a6334e WatchSource:0}: Error finding container 4abdef9dc6922daf012cf6537e9c069690967ed3e5b1604d0d7e601245a6334e: Status 404 returned error can't find the container with id 4abdef9dc6922daf012cf6537e9c069690967ed3e5b1604d0d7e601245a6334e Feb 23 09:25:52 crc kubenswrapper[4940]: I0223 09:25:52.783050 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wmjl"] Feb 23 09:25:53 crc kubenswrapper[4940]: I0223 09:25:53.544710 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm58q" event={"ID":"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f","Type":"ContainerStarted","Data":"b1349275cda9521699d81e0230206e9eb1b9ccd77ce91ce349d8053ff74f2260"} Feb 23 09:25:53 crc kubenswrapper[4940]: I0223 09:25:53.547645 4940 generic.go:334] "Generic (PLEG): container finished" podID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerID="32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5" exitCode=0 Feb 23 09:25:53 crc kubenswrapper[4940]: I0223 
09:25:53.547708 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerDied","Data":"32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5"} Feb 23 09:25:53 crc kubenswrapper[4940]: I0223 09:25:53.547743 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerStarted","Data":"4abdef9dc6922daf012cf6537e9c069690967ed3e5b1604d0d7e601245a6334e"} Feb 23 09:25:53 crc kubenswrapper[4940]: I0223 09:25:53.573403 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mm58q" podStartSLOduration=2.921324189 podStartE2EDuration="5.573376651s" podCreationTimestamp="2026-02-23 09:25:48 +0000 UTC" firstStartedPulling="2026-02-23 09:25:50.507897291 +0000 UTC m=+2281.891103448" lastFinishedPulling="2026-02-23 09:25:53.159949753 +0000 UTC m=+2284.543155910" observedRunningTime="2026-02-23 09:25:53.56857551 +0000 UTC m=+2284.951781697" watchObservedRunningTime="2026-02-23 09:25:53.573376651 +0000 UTC m=+2284.956582818" Feb 23 09:25:54 crc kubenswrapper[4940]: I0223 09:25:54.561736 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerStarted","Data":"54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f"} Feb 23 09:25:55 crc kubenswrapper[4940]: I0223 09:25:55.572146 4940 generic.go:334] "Generic (PLEG): container finished" podID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerID="54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f" exitCode=0 Feb 23 09:25:55 crc kubenswrapper[4940]: I0223 09:25:55.572229 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" 
event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerDied","Data":"54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f"} Feb 23 09:25:55 crc kubenswrapper[4940]: I0223 09:25:55.572539 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerStarted","Data":"d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685"} Feb 23 09:25:55 crc kubenswrapper[4940]: I0223 09:25:55.596176 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6wmjl" podStartSLOduration=3.189221381 podStartE2EDuration="4.5961561s" podCreationTimestamp="2026-02-23 09:25:51 +0000 UTC" firstStartedPulling="2026-02-23 09:25:53.549107547 +0000 UTC m=+2284.932313724" lastFinishedPulling="2026-02-23 09:25:54.956042286 +0000 UTC m=+2286.339248443" observedRunningTime="2026-02-23 09:25:55.588913971 +0000 UTC m=+2286.972120128" watchObservedRunningTime="2026-02-23 09:25:55.5961561 +0000 UTC m=+2286.979362257" Feb 23 09:25:56 crc kubenswrapper[4940]: I0223 09:25:56.345574 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:25:56 crc kubenswrapper[4940]: E0223 09:25:56.345891 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:25:59 crc kubenswrapper[4940]: I0223 09:25:59.256523 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:59 crc 
kubenswrapper[4940]: I0223 09:25:59.257120 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:59 crc kubenswrapper[4940]: I0223 09:25:59.314254 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:25:59 crc kubenswrapper[4940]: I0223 09:25:59.656904 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.290243 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.290540 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.327790 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm58q"] Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.328262 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mm58q" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="registry-server" containerID="cri-o://b1349275cda9521699d81e0230206e9eb1b9ccd77ce91ce349d8053ff74f2260" gracePeriod=2 Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.363048 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.646563 4940 generic.go:334] "Generic (PLEG): container finished" podID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerID="b1349275cda9521699d81e0230206e9eb1b9ccd77ce91ce349d8053ff74f2260" exitCode=0 Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.647705 4940 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm58q" event={"ID":"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f","Type":"ContainerDied","Data":"b1349275cda9521699d81e0230206e9eb1b9ccd77ce91ce349d8053ff74f2260"} Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.741619 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.819104 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.988152 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-catalog-content\") pod \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.988471 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-utilities\") pod \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.988534 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbd66\" (UniqueName: \"kubernetes.io/projected/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-kube-api-access-kbd66\") pod \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\" (UID: \"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f\") " Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.989364 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-utilities" (OuterVolumeSpecName: "utilities") pod "504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" (UID: 
"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:26:02 crc kubenswrapper[4940]: I0223 09:26:02.996039 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-kube-api-access-kbd66" (OuterVolumeSpecName: "kube-api-access-kbd66") pod "504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" (UID: "504546d3-1a8c-4bde-b9f0-b2f30de9fe7f"). InnerVolumeSpecName "kube-api-access-kbd66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.038792 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" (UID: "504546d3-1a8c-4bde-b9f0-b2f30de9fe7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.091377 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.091413 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.091422 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbd66\" (UniqueName: \"kubernetes.io/projected/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f-kube-api-access-kbd66\") on node \"crc\" DevicePath \"\"" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.662252 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mm58q" event={"ID":"504546d3-1a8c-4bde-b9f0-b2f30de9fe7f","Type":"ContainerDied","Data":"d69a7465f351a425e0290066e73f999091d1a267150734ddcc8e6a9734f2e0b2"} Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.662312 4940 scope.go:117] "RemoveContainer" containerID="b1349275cda9521699d81e0230206e9eb1b9ccd77ce91ce349d8053ff74f2260" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.662328 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm58q" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.691465 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm58q"] Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.698473 4940 scope.go:117] "RemoveContainer" containerID="d936bccb181d28e579f4ddbb734a8cb73dd3a6f095b85b2ef8f9e3632aeae332" Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.700875 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mm58q"] Feb 23 09:26:03 crc kubenswrapper[4940]: I0223 09:26:03.722391 4940 scope.go:117] "RemoveContainer" containerID="2b00dbb41d5dd5a6c3b4414e5fbc806870167bbf088722ee37ef4fda5b793a5e" Feb 23 09:26:05 crc kubenswrapper[4940]: I0223 09:26:05.356503 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" path="/var/lib/kubelet/pods/504546d3-1a8c-4bde-b9f0-b2f30de9fe7f/volumes" Feb 23 09:26:05 crc kubenswrapper[4940]: I0223 09:26:05.919912 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wmjl"] Feb 23 09:26:05 crc kubenswrapper[4940]: I0223 09:26:05.920211 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6wmjl" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="registry-server" 
containerID="cri-o://d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685" gracePeriod=2 Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.393028 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.563649 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-catalog-content\") pod \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.563882 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-utilities\") pod \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.563916 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr4gq\" (UniqueName: \"kubernetes.io/projected/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-kube-api-access-nr4gq\") pod \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\" (UID: \"0a702910-ccc7-4f1e-a90e-b4dc27c4881a\") " Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.564865 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-utilities" (OuterVolumeSpecName: "utilities") pod "0a702910-ccc7-4f1e-a90e-b4dc27c4881a" (UID: "0a702910-ccc7-4f1e-a90e-b4dc27c4881a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.570166 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-kube-api-access-nr4gq" (OuterVolumeSpecName: "kube-api-access-nr4gq") pod "0a702910-ccc7-4f1e-a90e-b4dc27c4881a" (UID: "0a702910-ccc7-4f1e-a90e-b4dc27c4881a"). InnerVolumeSpecName "kube-api-access-nr4gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.594995 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a702910-ccc7-4f1e-a90e-b4dc27c4881a" (UID: "0a702910-ccc7-4f1e-a90e-b4dc27c4881a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.665913 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.666212 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr4gq\" (UniqueName: \"kubernetes.io/projected/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-kube-api-access-nr4gq\") on node \"crc\" DevicePath \"\"" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.666225 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a702910-ccc7-4f1e-a90e-b4dc27c4881a-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.697473 4940 generic.go:334] "Generic (PLEG): container finished" podID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" 
containerID="d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685" exitCode=0 Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.697525 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerDied","Data":"d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685"} Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.697600 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6wmjl" event={"ID":"0a702910-ccc7-4f1e-a90e-b4dc27c4881a","Type":"ContainerDied","Data":"4abdef9dc6922daf012cf6537e9c069690967ed3e5b1604d0d7e601245a6334e"} Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.697548 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6wmjl" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.697649 4940 scope.go:117] "RemoveContainer" containerID="d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.741239 4940 scope.go:117] "RemoveContainer" containerID="54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.743074 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wmjl"] Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.763114 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6wmjl"] Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.781235 4940 scope.go:117] "RemoveContainer" containerID="32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.818606 4940 scope.go:117] "RemoveContainer" containerID="d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685" Feb 23 
09:26:06 crc kubenswrapper[4940]: E0223 09:26:06.819197 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685\": container with ID starting with d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685 not found: ID does not exist" containerID="d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.819314 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685"} err="failed to get container status \"d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685\": rpc error: code = NotFound desc = could not find container \"d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685\": container with ID starting with d76568a1836835dffe805e1ab10820d07c4eb8a57c458d968a9d1cda63f4c685 not found: ID does not exist" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.819449 4940 scope.go:117] "RemoveContainer" containerID="54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f" Feb 23 09:26:06 crc kubenswrapper[4940]: E0223 09:26:06.819866 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f\": container with ID starting with 54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f not found: ID does not exist" containerID="54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.819945 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f"} err="failed to get container status 
\"54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f\": rpc error: code = NotFound desc = could not find container \"54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f\": container with ID starting with 54cb9e526c3968e5558fac3e477db6dce6a953bc72279a0942263d596c738d0f not found: ID does not exist" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.820010 4940 scope.go:117] "RemoveContainer" containerID="32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5" Feb 23 09:26:06 crc kubenswrapper[4940]: E0223 09:26:06.820376 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5\": container with ID starting with 32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5 not found: ID does not exist" containerID="32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5" Feb 23 09:26:06 crc kubenswrapper[4940]: I0223 09:26:06.820456 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5"} err="failed to get container status \"32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5\": rpc error: code = NotFound desc = could not find container \"32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5\": container with ID starting with 32c192a0c086790493ec2cebda82fcf2e192d2d6f7313bdc5c471a804ab3b8c5 not found: ID does not exist" Feb 23 09:26:07 crc kubenswrapper[4940]: I0223 09:26:07.370302 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" path="/var/lib/kubelet/pods/0a702910-ccc7-4f1e-a90e-b4dc27c4881a/volumes" Feb 23 09:26:11 crc kubenswrapper[4940]: I0223 09:26:11.347870 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 
09:26:11 crc kubenswrapper[4940]: E0223 09:26:11.349435 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:26:22 crc kubenswrapper[4940]: I0223 09:26:22.345425 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:26:22 crc kubenswrapper[4940]: E0223 09:26:22.346180 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:26:33 crc kubenswrapper[4940]: I0223 09:26:33.346794 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:26:33 crc kubenswrapper[4940]: E0223 09:26:33.348327 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:26:46 crc kubenswrapper[4940]: I0223 09:26:46.346013 4940 scope.go:117] "RemoveContainer" 
containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:26:46 crc kubenswrapper[4940]: E0223 09:26:46.346901 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.269796 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5tft"] Feb 23 09:26:47 crc kubenswrapper[4940]: E0223 09:26:47.270560 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="extract-utilities" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270582 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="extract-utilities" Feb 23 09:26:47 crc kubenswrapper[4940]: E0223 09:26:47.270597 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="registry-server" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270605 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="registry-server" Feb 23 09:26:47 crc kubenswrapper[4940]: E0223 09:26:47.270638 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="registry-server" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270646 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="registry-server" Feb 23 09:26:47 crc kubenswrapper[4940]: E0223 09:26:47.270675 
4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="extract-utilities" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270682 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="extract-utilities" Feb 23 09:26:47 crc kubenswrapper[4940]: E0223 09:26:47.270691 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="extract-content" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270698 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="extract-content" Feb 23 09:26:47 crc kubenswrapper[4940]: E0223 09:26:47.270714 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="extract-content" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270721 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="extract-content" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270951 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="504546d3-1a8c-4bde-b9f0-b2f30de9fe7f" containerName="registry-server" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.270986 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a702910-ccc7-4f1e-a90e-b4dc27c4881a" containerName="registry-server" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.272687 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.281718 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5tft"] Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.475350 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-catalog-content\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.475487 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvkv\" (UniqueName: \"kubernetes.io/projected/7d3f2281-cfa7-4651-966a-0105d4ebb98e-kube-api-access-hcvkv\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.475645 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-utilities\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.578826 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-catalog-content\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.578919 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hcvkv\" (UniqueName: \"kubernetes.io/projected/7d3f2281-cfa7-4651-966a-0105d4ebb98e-kube-api-access-hcvkv\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.578961 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-utilities\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.579438 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-catalog-content\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.579493 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-utilities\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.604343 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcvkv\" (UniqueName: \"kubernetes.io/projected/7d3f2281-cfa7-4651-966a-0105d4ebb98e-kube-api-access-hcvkv\") pod \"community-operators-k5tft\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:47 crc kubenswrapper[4940]: I0223 09:26:47.614067 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:48 crc kubenswrapper[4940]: I0223 09:26:48.173717 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5tft"] Feb 23 09:26:48 crc kubenswrapper[4940]: W0223 09:26:48.176677 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d3f2281_cfa7_4651_966a_0105d4ebb98e.slice/crio-b6759c5ec769c6c8bbd7a04091997886cea4dbbe603fa034a2579def9ab6ffc9 WatchSource:0}: Error finding container b6759c5ec769c6c8bbd7a04091997886cea4dbbe603fa034a2579def9ab6ffc9: Status 404 returned error can't find the container with id b6759c5ec769c6c8bbd7a04091997886cea4dbbe603fa034a2579def9ab6ffc9 Feb 23 09:26:49 crc kubenswrapper[4940]: I0223 09:26:49.086401 4940 generic.go:334] "Generic (PLEG): container finished" podID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerID="b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de" exitCode=0 Feb 23 09:26:49 crc kubenswrapper[4940]: I0223 09:26:49.086467 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5tft" event={"ID":"7d3f2281-cfa7-4651-966a-0105d4ebb98e","Type":"ContainerDied","Data":"b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de"} Feb 23 09:26:49 crc kubenswrapper[4940]: I0223 09:26:49.086751 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5tft" event={"ID":"7d3f2281-cfa7-4651-966a-0105d4ebb98e","Type":"ContainerStarted","Data":"b6759c5ec769c6c8bbd7a04091997886cea4dbbe603fa034a2579def9ab6ffc9"} Feb 23 09:26:51 crc kubenswrapper[4940]: I0223 09:26:51.105682 4940 generic.go:334] "Generic (PLEG): container finished" podID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerID="b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0" exitCode=0 Feb 23 09:26:51 crc kubenswrapper[4940]: I0223 
09:26:51.105734 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5tft" event={"ID":"7d3f2281-cfa7-4651-966a-0105d4ebb98e","Type":"ContainerDied","Data":"b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0"} Feb 23 09:26:52 crc kubenswrapper[4940]: I0223 09:26:52.117215 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5tft" event={"ID":"7d3f2281-cfa7-4651-966a-0105d4ebb98e","Type":"ContainerStarted","Data":"11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f"} Feb 23 09:26:52 crc kubenswrapper[4940]: I0223 09:26:52.143651 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5tft" podStartSLOduration=2.75222764 podStartE2EDuration="5.143629135s" podCreationTimestamp="2026-02-23 09:26:47 +0000 UTC" firstStartedPulling="2026-02-23 09:26:49.08873734 +0000 UTC m=+2340.471943497" lastFinishedPulling="2026-02-23 09:26:51.480138815 +0000 UTC m=+2342.863344992" observedRunningTime="2026-02-23 09:26:52.134622241 +0000 UTC m=+2343.517828408" watchObservedRunningTime="2026-02-23 09:26:52.143629135 +0000 UTC m=+2343.526835312" Feb 23 09:26:57 crc kubenswrapper[4940]: I0223 09:26:57.346555 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:26:57 crc kubenswrapper[4940]: E0223 09:26:57.347326 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:26:57 crc kubenswrapper[4940]: I0223 09:26:57.615751 4940 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:57 crc kubenswrapper[4940]: I0223 09:26:57.615818 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:57 crc kubenswrapper[4940]: I0223 09:26:57.672841 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:58 crc kubenswrapper[4940]: I0223 09:26:58.219584 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:26:58 crc kubenswrapper[4940]: I0223 09:26:58.276720 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5tft"] Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.181984 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5tft" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="registry-server" containerID="cri-o://11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f" gracePeriod=2 Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.590515 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.692658 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-utilities\") pod \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.692772 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-catalog-content\") pod \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.692813 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcvkv\" (UniqueName: \"kubernetes.io/projected/7d3f2281-cfa7-4651-966a-0105d4ebb98e-kube-api-access-hcvkv\") pod \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\" (UID: \"7d3f2281-cfa7-4651-966a-0105d4ebb98e\") " Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.695114 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-utilities" (OuterVolumeSpecName: "utilities") pod "7d3f2281-cfa7-4651-966a-0105d4ebb98e" (UID: "7d3f2281-cfa7-4651-966a-0105d4ebb98e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.700108 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3f2281-cfa7-4651-966a-0105d4ebb98e-kube-api-access-hcvkv" (OuterVolumeSpecName: "kube-api-access-hcvkv") pod "7d3f2281-cfa7-4651-966a-0105d4ebb98e" (UID: "7d3f2281-cfa7-4651-966a-0105d4ebb98e"). InnerVolumeSpecName "kube-api-access-hcvkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.752161 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d3f2281-cfa7-4651-966a-0105d4ebb98e" (UID: "7d3f2281-cfa7-4651-966a-0105d4ebb98e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.795581 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.795663 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcvkv\" (UniqueName: \"kubernetes.io/projected/7d3f2281-cfa7-4651-966a-0105d4ebb98e-kube-api-access-hcvkv\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:00 crc kubenswrapper[4940]: I0223 09:27:00.795684 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3f2281-cfa7-4651-966a-0105d4ebb98e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.191728 4940 generic.go:334] "Generic (PLEG): container finished" podID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerID="11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f" exitCode=0 Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.191788 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5tft" event={"ID":"7d3f2281-cfa7-4651-966a-0105d4ebb98e","Type":"ContainerDied","Data":"11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f"} Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.192104 4940 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-k5tft" event={"ID":"7d3f2281-cfa7-4651-966a-0105d4ebb98e","Type":"ContainerDied","Data":"b6759c5ec769c6c8bbd7a04091997886cea4dbbe603fa034a2579def9ab6ffc9"} Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.192133 4940 scope.go:117] "RemoveContainer" containerID="11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.191777 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5tft" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.214652 4940 scope.go:117] "RemoveContainer" containerID="b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.247312 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5tft"] Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.248922 4940 scope.go:117] "RemoveContainer" containerID="b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.260974 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5tft"] Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.293130 4940 scope.go:117] "RemoveContainer" containerID="11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f" Feb 23 09:27:01 crc kubenswrapper[4940]: E0223 09:27:01.294004 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f\": container with ID starting with 11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f not found: ID does not exist" containerID="11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 
09:27:01.294133 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f"} err="failed to get container status \"11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f\": rpc error: code = NotFound desc = could not find container \"11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f\": container with ID starting with 11bc100a7ce6e21b27c17f2841b6a7317f7e452c5688c495cf93af94d8e7dc6f not found: ID does not exist" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.294240 4940 scope.go:117] "RemoveContainer" containerID="b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0" Feb 23 09:27:01 crc kubenswrapper[4940]: E0223 09:27:01.295024 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0\": container with ID starting with b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0 not found: ID does not exist" containerID="b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.295170 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0"} err="failed to get container status \"b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0\": rpc error: code = NotFound desc = could not find container \"b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0\": container with ID starting with b450c52a7014b9b6484fce7dfc83fde4402a1afbb0da9cb71e682ad34b00eee0 not found: ID does not exist" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.295288 4940 scope.go:117] "RemoveContainer" containerID="b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de" Feb 23 09:27:01 crc 
kubenswrapper[4940]: E0223 09:27:01.295535 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de\": container with ID starting with b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de not found: ID does not exist" containerID="b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.295647 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de"} err="failed to get container status \"b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de\": rpc error: code = NotFound desc = could not find container \"b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de\": container with ID starting with b6c1693bd91613a2c437dd9d31fbd5ab91feb9c4d836c4b91eb8ad29f73c37de not found: ID does not exist" Feb 23 09:27:01 crc kubenswrapper[4940]: I0223 09:27:01.355945 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" path="/var/lib/kubelet/pods/7d3f2281-cfa7-4651-966a-0105d4ebb98e/volumes" Feb 23 09:27:09 crc kubenswrapper[4940]: I0223 09:27:09.354149 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:27:09 crc kubenswrapper[4940]: E0223 09:27:09.354871 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:27:21 crc 
kubenswrapper[4940]: I0223 09:27:21.346522 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:27:21 crc kubenswrapper[4940]: E0223 09:27:21.347206 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:27:32 crc kubenswrapper[4940]: I0223 09:27:32.345460 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:27:32 crc kubenswrapper[4940]: E0223 09:27:32.346434 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:27:44 crc kubenswrapper[4940]: I0223 09:27:44.346085 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:27:44 crc kubenswrapper[4940]: E0223 09:27:44.347207 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 
23 09:27:48 crc kubenswrapper[4940]: I0223 09:27:48.626987 4940 generic.go:334] "Generic (PLEG): container finished" podID="7376823f-eb39-4631-9cac-0d4b297a9580" containerID="f3fd538bd5bd089f70155a60a46a7786f902ca013793534ad92a33ad428df83e" exitCode=0 Feb 23 09:27:48 crc kubenswrapper[4940]: I0223 09:27:48.627061 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" event={"ID":"7376823f-eb39-4631-9cac-0d4b297a9580","Type":"ContainerDied","Data":"f3fd538bd5bd089f70155a60a46a7786f902ca013793534ad92a33ad428df83e"} Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.073885 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.205781 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-inventory\") pod \"7376823f-eb39-4631-9cac-0d4b297a9580\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.205873 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txwlf\" (UniqueName: \"kubernetes.io/projected/7376823f-eb39-4631-9cac-0d4b297a9580-kube-api-access-txwlf\") pod \"7376823f-eb39-4631-9cac-0d4b297a9580\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.206069 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-ssh-key-openstack-edpm-ipam\") pod \"7376823f-eb39-4631-9cac-0d4b297a9580\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.206090 4940 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-secret-0\") pod \"7376823f-eb39-4631-9cac-0d4b297a9580\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.206712 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-combined-ca-bundle\") pod \"7376823f-eb39-4631-9cac-0d4b297a9580\" (UID: \"7376823f-eb39-4631-9cac-0d4b297a9580\") " Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.212736 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7376823f-eb39-4631-9cac-0d4b297a9580-kube-api-access-txwlf" (OuterVolumeSpecName: "kube-api-access-txwlf") pod "7376823f-eb39-4631-9cac-0d4b297a9580" (UID: "7376823f-eb39-4631-9cac-0d4b297a9580"). InnerVolumeSpecName "kube-api-access-txwlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.214034 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7376823f-eb39-4631-9cac-0d4b297a9580" (UID: "7376823f-eb39-4631-9cac-0d4b297a9580"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.238021 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7376823f-eb39-4631-9cac-0d4b297a9580" (UID: "7376823f-eb39-4631-9cac-0d4b297a9580"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.238322 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-inventory" (OuterVolumeSpecName: "inventory") pod "7376823f-eb39-4631-9cac-0d4b297a9580" (UID: "7376823f-eb39-4631-9cac-0d4b297a9580"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.243762 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7376823f-eb39-4631-9cac-0d4b297a9580" (UID: "7376823f-eb39-4631-9cac-0d4b297a9580"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.309182 4940 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.309209 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.309221 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txwlf\" (UniqueName: \"kubernetes.io/projected/7376823f-eb39-4631-9cac-0d4b297a9580-kube-api-access-txwlf\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.309231 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-ssh-key-openstack-edpm-ipam\") 
on node \"crc\" DevicePath \"\"" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.309239 4940 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7376823f-eb39-4631-9cac-0d4b297a9580-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.646104 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" event={"ID":"7376823f-eb39-4631-9cac-0d4b297a9580","Type":"ContainerDied","Data":"3e06008587d379a381db539875d93868cba8e66561ea10391670248cfdb95fde"} Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.646179 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e06008587d379a381db539875d93868cba8e66561ea10391670248cfdb95fde" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.646186 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.803381 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8"] Feb 23 09:27:50 crc kubenswrapper[4940]: E0223 09:27:50.804543 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="extract-content" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.804561 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="extract-content" Feb 23 09:27:50 crc kubenswrapper[4940]: E0223 09:27:50.804588 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7376823f-eb39-4631-9cac-0d4b297a9580" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.804627 4940 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7376823f-eb39-4631-9cac-0d4b297a9580" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 09:27:50 crc kubenswrapper[4940]: E0223 09:27:50.804683 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="extract-utilities" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.804693 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="extract-utilities" Feb 23 09:27:50 crc kubenswrapper[4940]: E0223 09:27:50.804712 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="registry-server" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.804721 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="registry-server" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.805065 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3f2281-cfa7-4651-966a-0105d4ebb98e" containerName="registry-server" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.805102 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="7376823f-eb39-4631-9cac-0d4b297a9580" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.806227 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.809417 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.809791 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.810084 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.810183 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.810202 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.810231 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.810348 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.845297 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8"] Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.921983 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: 
I0223 09:27:50.922295 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5phc\" (UniqueName: \"kubernetes.io/projected/4528f4f4-45cd-415f-902e-d15ecef72b60-kube-api-access-q5phc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.922622 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.922736 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.922797 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.922851 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" 
(UniqueName: \"kubernetes.io/configmap/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.922891 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.922986 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.923027 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.923168 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-ssh-key-openstack-edpm-ipam\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:50 crc kubenswrapper[4940]: I0223 09:27:50.923294 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.025003 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.025444 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.025680 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5phc\" (UniqueName: \"kubernetes.io/projected/4528f4f4-45cd-415f-902e-d15ecef72b60-kube-api-access-q5phc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.025818 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.025914 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.026002 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.026081 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.026200 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-2\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.026297 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.026375 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.026494 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.027089 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.029977 4940 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.030279 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.030449 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.031768 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.032182 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: 
\"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.045442 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.047164 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.047857 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.048516 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.058716 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q5phc\" (UniqueName: \"kubernetes.io/projected/4528f4f4-45cd-415f-902e-d15ecef72b60-kube-api-access-q5phc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-wj7m8\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.139760 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:27:51 crc kubenswrapper[4940]: I0223 09:27:51.730054 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8"] Feb 23 09:27:52 crc kubenswrapper[4940]: I0223 09:27:52.676493 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" event={"ID":"4528f4f4-45cd-415f-902e-d15ecef72b60","Type":"ContainerStarted","Data":"97526a3f9455364d3f0bc394a4d207026cf666a090c91a7dc95ee84d21580e39"} Feb 23 09:27:52 crc kubenswrapper[4940]: I0223 09:27:52.677110 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" event={"ID":"4528f4f4-45cd-415f-902e-d15ecef72b60","Type":"ContainerStarted","Data":"b641abe943de9d2f0cbde780ae6ad2e07a9ae67fbf0b551d380a0f375866a510"} Feb 23 09:27:52 crc kubenswrapper[4940]: I0223 09:27:52.701536 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" podStartSLOduration=2.067967989 podStartE2EDuration="2.70151442s" podCreationTimestamp="2026-02-23 09:27:50 +0000 UTC" firstStartedPulling="2026-02-23 09:27:51.732658485 +0000 UTC m=+2403.115864642" lastFinishedPulling="2026-02-23 09:27:52.366204916 +0000 UTC m=+2403.749411073" observedRunningTime="2026-02-23 09:27:52.696368058 +0000 UTC m=+2404.079574225" watchObservedRunningTime="2026-02-23 09:27:52.70151442 +0000 UTC m=+2404.084720577" Feb 23 
09:27:58 crc kubenswrapper[4940]: I0223 09:27:58.345598 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:27:58 crc kubenswrapper[4940]: E0223 09:27:58.346439 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:28:12 crc kubenswrapper[4940]: I0223 09:28:12.345337 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:28:12 crc kubenswrapper[4940]: E0223 09:28:12.347451 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:28:24 crc kubenswrapper[4940]: I0223 09:28:24.346095 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:28:24 crc kubenswrapper[4940]: E0223 09:28:24.346920 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:28:39 crc kubenswrapper[4940]: I0223 09:28:39.354510 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:28:39 crc kubenswrapper[4940]: E0223 09:28:39.355758 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:28:53 crc kubenswrapper[4940]: I0223 09:28:53.356399 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:28:53 crc kubenswrapper[4940]: E0223 09:28:53.359835 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:29:06 crc kubenswrapper[4940]: I0223 09:29:06.345975 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:29:06 crc kubenswrapper[4940]: E0223 09:29:06.346988 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:29:20 crc kubenswrapper[4940]: I0223 09:29:20.347401 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:29:20 crc kubenswrapper[4940]: E0223 09:29:20.348658 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:29:33 crc kubenswrapper[4940]: I0223 09:29:33.345799 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:29:33 crc kubenswrapper[4940]: E0223 09:29:33.346577 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:29:45 crc kubenswrapper[4940]: I0223 09:29:45.346133 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:29:45 crc kubenswrapper[4940]: E0223 09:29:45.347016 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:29:59 crc kubenswrapper[4940]: I0223 09:29:59.356327 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:29:59 crc kubenswrapper[4940]: E0223 09:29:59.357234 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.180317 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp"] Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.185095 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.187973 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.188396 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.205353 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp"] Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.250213 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmq5r\" (UniqueName: \"kubernetes.io/projected/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-kube-api-access-jmq5r\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.250274 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-config-volume\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.250296 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-secret-volume\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.352037 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmq5r\" (UniqueName: \"kubernetes.io/projected/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-kube-api-access-jmq5r\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.352095 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-config-volume\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.352114 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-secret-volume\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.353034 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-config-volume\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.358466 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-secret-volume\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.370197 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmq5r\" (UniqueName: \"kubernetes.io/projected/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-kube-api-access-jmq5r\") pod \"collect-profiles-29530650-5t7cp\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.529686 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:00 crc kubenswrapper[4940]: I0223 09:30:00.970447 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp"] Feb 23 09:30:00 crc kubenswrapper[4940]: W0223 09:30:00.972928 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod608b2d6d_4d96_4ccf_82f8_8b8e0f0f15c3.slice/crio-eed75dbefdefeaede921b1533e5d09eaed5d9760fe0c672844073aef6e3d6b8d WatchSource:0}: Error finding container eed75dbefdefeaede921b1533e5d09eaed5d9760fe0c672844073aef6e3d6b8d: Status 404 returned error can't find the container with id eed75dbefdefeaede921b1533e5d09eaed5d9760fe0c672844073aef6e3d6b8d Feb 23 09:30:01 crc kubenswrapper[4940]: I0223 09:30:01.927446 4940 generic.go:334] "Generic (PLEG): container finished" podID="608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" containerID="5fcdb617cb1878693644da3e8b924fe966f5f382e6c84787909dd58a44ac1a19" exitCode=0 Feb 23 09:30:01 crc kubenswrapper[4940]: I0223 09:30:01.927748 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" event={"ID":"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3","Type":"ContainerDied","Data":"5fcdb617cb1878693644da3e8b924fe966f5f382e6c84787909dd58a44ac1a19"} Feb 23 09:30:01 crc kubenswrapper[4940]: I0223 09:30:01.927781 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" event={"ID":"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3","Type":"ContainerStarted","Data":"eed75dbefdefeaede921b1533e5d09eaed5d9760fe0c672844073aef6e3d6b8d"} Feb 23 09:30:02 crc kubenswrapper[4940]: I0223 09:30:02.936852 4940 generic.go:334] "Generic (PLEG): container finished" podID="4528f4f4-45cd-415f-902e-d15ecef72b60" containerID="97526a3f9455364d3f0bc394a4d207026cf666a090c91a7dc95ee84d21580e39" exitCode=0 Feb 23 09:30:02 crc kubenswrapper[4940]: I0223 09:30:02.936924 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" event={"ID":"4528f4f4-45cd-415f-902e-d15ecef72b60","Type":"ContainerDied","Data":"97526a3f9455364d3f0bc394a4d207026cf666a090c91a7dc95ee84d21580e39"} Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.280385 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.418557 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-secret-volume\") pod \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.418671 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-config-volume\") pod \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.418896 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmq5r\" (UniqueName: \"kubernetes.io/projected/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-kube-api-access-jmq5r\") pod \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\" (UID: \"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3\") " Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.419359 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-config-volume" (OuterVolumeSpecName: "config-volume") pod "608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" (UID: "608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.420560 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.425107 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-kube-api-access-jmq5r" (OuterVolumeSpecName: "kube-api-access-jmq5r") pod "608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" (UID: "608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3"). InnerVolumeSpecName "kube-api-access-jmq5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.433847 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" (UID: "608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.523177 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.523210 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmq5r\" (UniqueName: \"kubernetes.io/projected/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3-kube-api-access-jmq5r\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.947126 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.947122 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp" event={"ID":"608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3","Type":"ContainerDied","Data":"eed75dbefdefeaede921b1533e5d09eaed5d9760fe0c672844073aef6e3d6b8d"} Feb 23 09:30:03 crc kubenswrapper[4940]: I0223 09:30:03.947182 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed75dbefdefeaede921b1533e5d09eaed5d9760fe0c672844073aef6e3d6b8d" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.353647 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"] Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.365010 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530605-25rxm"] Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.403355 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573122 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-0\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573211 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-combined-ca-bundle\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573241 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-2\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573259 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5phc\" (UniqueName: \"kubernetes.io/projected/4528f4f4-45cd-415f-902e-d15ecef72b60-kube-api-access-q5phc\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573353 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-3\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: 
I0223 09:30:04.573395 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-extra-config-0\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573415 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-1\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573453 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-0\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573475 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-1\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573501 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-ssh-key-openstack-edpm-ipam\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.573572 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-inventory\") pod \"4528f4f4-45cd-415f-902e-d15ecef72b60\" (UID: \"4528f4f4-45cd-415f-902e-d15ecef72b60\") " Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.583931 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.584122 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4528f4f4-45cd-415f-902e-d15ecef72b60-kube-api-access-q5phc" (OuterVolumeSpecName: "kube-api-access-q5phc") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "kube-api-access-q5phc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.605697 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.607108 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.607587 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.609636 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.609686 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.610203 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.611320 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.612070 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-inventory" (OuterVolumeSpecName: "inventory") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.613842 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4528f4f4-45cd-415f-902e-d15ecef72b60" (UID: "4528f4f4-45cd-415f-902e-d15ecef72b60"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676144 4940 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676179 4940 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676188 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5phc\" (UniqueName: \"kubernetes.io/projected/4528f4f4-45cd-415f-902e-d15ecef72b60-kube-api-access-q5phc\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676196 4940 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676206 4940 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676215 4940 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676222 4940 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676230 4940 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676237 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676246 4940 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.676256 4940 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4528f4f4-45cd-415f-902e-d15ecef72b60-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.962335 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" event={"ID":"4528f4f4-45cd-415f-902e-d15ecef72b60","Type":"ContainerDied","Data":"b641abe943de9d2f0cbde780ae6ad2e07a9ae67fbf0b551d380a0f375866a510"} Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.962882 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b641abe943de9d2f0cbde780ae6ad2e07a9ae67fbf0b551d380a0f375866a510" Feb 23 09:30:04 crc kubenswrapper[4940]: I0223 09:30:04.962838 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-wj7m8" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.046597 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49"] Feb 23 09:30:05 crc kubenswrapper[4940]: E0223 09:30:05.047128 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4528f4f4-45cd-415f-902e-d15ecef72b60" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.047157 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4528f4f4-45cd-415f-902e-d15ecef72b60" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 09:30:05 crc kubenswrapper[4940]: E0223 09:30:05.047178 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" containerName="collect-profiles" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.047186 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" containerName="collect-profiles" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.047433 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="4528f4f4-45cd-415f-902e-d15ecef72b60" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.047469 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" containerName="collect-profiles" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.048314 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.051025 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.051140 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.051368 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.053495 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.053706 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x648h" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.058964 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49"] Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187096 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187135 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187220 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187242 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187270 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187291 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4q55\" (UniqueName: \"kubernetes.io/projected/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-kube-api-access-w4q55\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.187317 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.289758 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.289822 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.289863 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.289900 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w4q55\" (UniqueName: \"kubernetes.io/projected/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-kube-api-access-w4q55\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.289933 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.290036 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.290056 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.293988 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.294016 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.294475 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.295583 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.296900 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.303399 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.312490 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4q55\" (UniqueName: \"kubernetes.io/projected/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-kube-api-access-w4q55\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5zv49\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.359780 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b16df0c-b660-4f3c-9d26-cfff395d5c88" path="/var/lib/kubelet/pods/1b16df0c-b660-4f3c-9d26-cfff395d5c88/volumes" Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.377736 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:30:05 crc kubenswrapper[4940]: W0223 09:30:05.905556 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16b77a40_fb67_4fe3_b4c8_d87dd4be9b25.slice/crio-8a6885351d67a64d32ee81f74dfd737eec55f9ea3c9a63b5be660187abe81e3c WatchSource:0}: Error finding container 8a6885351d67a64d32ee81f74dfd737eec55f9ea3c9a63b5be660187abe81e3c: Status 404 returned error can't find the container with id 8a6885351d67a64d32ee81f74dfd737eec55f9ea3c9a63b5be660187abe81e3c Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.910420 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49"] Feb 23 09:30:05 crc kubenswrapper[4940]: I0223 09:30:05.971870 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" event={"ID":"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25","Type":"ContainerStarted","Data":"8a6885351d67a64d32ee81f74dfd737eec55f9ea3c9a63b5be660187abe81e3c"} Feb 23 09:30:06 crc kubenswrapper[4940]: I0223 09:30:06.982387 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" event={"ID":"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25","Type":"ContainerStarted","Data":"b936b3bca7fec7fcaa0a32d838f7dc89e7522bf1b882e06b998bfdbb62c9a017"} Feb 23 09:30:07 crc kubenswrapper[4940]: I0223 09:30:07.022866 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" podStartSLOduration=1.508274662 podStartE2EDuration="2.022845682s" podCreationTimestamp="2026-02-23 09:30:05 +0000 UTC" firstStartedPulling="2026-02-23 09:30:05.907920464 +0000 UTC m=+2537.291126621" lastFinishedPulling="2026-02-23 09:30:06.422491484 +0000 UTC m=+2537.805697641" 
observedRunningTime="2026-02-23 09:30:06.999190828 +0000 UTC m=+2538.382396985" watchObservedRunningTime="2026-02-23 09:30:07.022845682 +0000 UTC m=+2538.406051849" Feb 23 09:30:11 crc kubenswrapper[4940]: I0223 09:30:11.397390 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:30:11 crc kubenswrapper[4940]: E0223 09:30:11.398248 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:30:24 crc kubenswrapper[4940]: I0223 09:30:24.346970 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:30:24 crc kubenswrapper[4940]: E0223 09:30:24.348497 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:30:35 crc kubenswrapper[4940]: I0223 09:30:35.472798 4940 scope.go:117] "RemoveContainer" containerID="517398e88a48d2218e2707899fb06889fdece6b02715d0d6da6fdfd4576022e5" Feb 23 09:30:39 crc kubenswrapper[4940]: I0223 09:30:39.351364 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:30:40 crc kubenswrapper[4940]: I0223 09:30:40.521910 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923"} Feb 23 09:32:18 crc kubenswrapper[4940]: I0223 09:32:18.399631 4940 generic.go:334] "Generic (PLEG): container finished" podID="16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" containerID="b936b3bca7fec7fcaa0a32d838f7dc89e7522bf1b882e06b998bfdbb62c9a017" exitCode=0 Feb 23 09:32:18 crc kubenswrapper[4940]: I0223 09:32:18.399671 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" event={"ID":"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25","Type":"ContainerDied","Data":"b936b3bca7fec7fcaa0a32d838f7dc89e7522bf1b882e06b998bfdbb62c9a017"} Feb 23 09:32:19 crc kubenswrapper[4940]: I0223 09:32:19.860580 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.017646 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ssh-key-openstack-edpm-ipam\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.017824 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-2\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.018080 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-1\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.018133 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-0\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.018194 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4q55\" (UniqueName: \"kubernetes.io/projected/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-kube-api-access-w4q55\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.018262 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-inventory\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.018333 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-telemetry-combined-ca-bundle\") pod \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\" (UID: \"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25\") " Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.025275 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: 
"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.026726 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-kube-api-access-w4q55" (OuterVolumeSpecName: "kube-api-access-w4q55") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). InnerVolumeSpecName "kube-api-access-w4q55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.055469 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.058605 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.059659 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). 
InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.061516 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-inventory" (OuterVolumeSpecName: "inventory") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.064522 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" (UID: "16b77a40-fb67-4fe3-b4c8-d87dd4be9b25"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125430 4940 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125481 4940 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125501 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4q55\" (UniqueName: \"kubernetes.io/projected/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-kube-api-access-w4q55\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125518 4940 reconciler_common.go:293] "Volume detached 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125534 4940 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125546 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.125557 4940 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/16b77a40-fb67-4fe3-b4c8-d87dd4be9b25-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.423202 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" event={"ID":"16b77a40-fb67-4fe3-b4c8-d87dd4be9b25","Type":"ContainerDied","Data":"8a6885351d67a64d32ee81f74dfd737eec55f9ea3c9a63b5be660187abe81e3c"} Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.423270 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a6885351d67a64d32ee81f74dfd737eec55f9ea3c9a63b5be660187abe81e3c" Feb 23 09:32:20 crc kubenswrapper[4940]: I0223 09:32:20.423297 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5zv49" Feb 23 09:33:01 crc kubenswrapper[4940]: I0223 09:33:01.429323 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:33:01 crc kubenswrapper[4940]: I0223 09:33:01.429902 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.362319 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 09:33:27 crc kubenswrapper[4940]: E0223 09:33:27.363383 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.363412 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.363712 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b77a40-fb67-4fe3-b4c8-d87dd4be9b25" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.364561 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.367604 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.368224 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.368493 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.386919 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.485542 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.485602 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.485745 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-config-data\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.485791 4940 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.485878 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.485965 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.486033 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.486062 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc 
kubenswrapper[4940]: I0223 09:33:27.486085 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqdk\" (UniqueName: \"kubernetes.io/projected/c7cd2a10-7128-40ff-98b8-6d3026b08566-kube-api-access-bvqdk\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.587878 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.587994 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.588066 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.588095 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 
09:33:27.588120 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqdk\" (UniqueName: \"kubernetes.io/projected/c7cd2a10-7128-40ff-98b8-6d3026b08566-kube-api-access-bvqdk\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.588149 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.588171 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.588211 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-config-data\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.588259 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.589515 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.589791 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.590077 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.591171 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.591196 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-config-data\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.598387 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.598513 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.598953 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.610647 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqdk\" (UniqueName: \"kubernetes.io/projected/c7cd2a10-7128-40ff-98b8-6d3026b08566-kube-api-access-bvqdk\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.625710 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " pod="openstack/tempest-tests-tempest" Feb 23 09:33:27 crc kubenswrapper[4940]: I0223 09:33:27.691420 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 09:33:28 crc kubenswrapper[4940]: I0223 09:33:28.167096 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 09:33:28 crc kubenswrapper[4940]: I0223 09:33:28.173899 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:33:29 crc kubenswrapper[4940]: I0223 09:33:29.164821 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c7cd2a10-7128-40ff-98b8-6d3026b08566","Type":"ContainerStarted","Data":"1651ee54a2d524c531fbcb3da84015af21c4e80438e01a53178f22abc41200fd"} Feb 23 09:33:31 crc kubenswrapper[4940]: I0223 09:33:31.433736 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:33:31 crc kubenswrapper[4940]: I0223 09:33:31.434311 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.648969 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmqvl"] Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.652754 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.666957 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmqvl"] Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.711528 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5t2\" (UniqueName: \"kubernetes.io/projected/539b3fff-1918-4211-b6cf-409a8ab9ccec-kube-api-access-vw5t2\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.711950 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-catalog-content\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.712317 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-utilities\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.814567 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw5t2\" (UniqueName: \"kubernetes.io/projected/539b3fff-1918-4211-b6cf-409a8ab9ccec-kube-api-access-vw5t2\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.815119 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-catalog-content\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.815820 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-catalog-content\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.816002 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-utilities\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.816312 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-utilities\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.835152 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw5t2\" (UniqueName: \"kubernetes.io/projected/539b3fff-1918-4211-b6cf-409a8ab9ccec-kube-api-access-vw5t2\") pod \"redhat-operators-nmqvl\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:33:43 crc kubenswrapper[4940]: I0223 09:33:43.984411 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:00 crc kubenswrapper[4940]: E0223 09:34:00.905865 4940 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 23 09:34:00 crc kubenswrapper[4940]: E0223 09:34:00.906776 4940 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/e
xtracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvqdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(c7cd2a10-7128-40ff-98b8-6d3026b08566): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 09:34:00 crc kubenswrapper[4940]: E0223 09:34:00.908093 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="c7cd2a10-7128-40ff-98b8-6d3026b08566" Feb 23 09:34:01 crc kubenswrapper[4940]: E0223 09:34:01.036167 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="c7cd2a10-7128-40ff-98b8-6d3026b08566" Feb 23 09:34:01 crc kubenswrapper[4940]: I0223 09:34:01.367333 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmqvl"] Feb 23 09:34:01 crc kubenswrapper[4940]: I0223 09:34:01.428926 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:34:01 crc kubenswrapper[4940]: I0223 09:34:01.428983 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:34:01 crc kubenswrapper[4940]: I0223 09:34:01.429031 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:34:01 crc kubenswrapper[4940]: I0223 09:34:01.429836 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Feb 23 09:34:01 crc kubenswrapper[4940]: I0223 09:34:01.429936 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923" gracePeriod=600 Feb 23 09:34:01 crc kubenswrapper[4940]: E0223 09:34:01.578895 4940 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3f2cfd6_5ddf_436d_998f_440f1cc642b1.slice/crio-b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923.scope\": RecentStats: unable to find data in memory cache]" Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.043845 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923" exitCode=0 Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.043929 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923"} Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.044252 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"} Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.044274 4940 scope.go:117] "RemoveContainer" containerID="6b12db6909c870fba93a3600a398d05d040f133d1fff6f41fc66f1a7da8177f8" Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.047966 
4940 generic.go:334] "Generic (PLEG): container finished" podID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerID="a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e" exitCode=0 Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.048017 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerDied","Data":"a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e"} Feb 23 09:34:02 crc kubenswrapper[4940]: I0223 09:34:02.048042 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerStarted","Data":"808dce6e2117ad70b7e20657c1f540639645bbe227c8182bf89390db194e3b61"} Feb 23 09:34:03 crc kubenswrapper[4940]: I0223 09:34:03.061907 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerStarted","Data":"36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d"} Feb 23 09:34:08 crc kubenswrapper[4940]: I0223 09:34:08.153333 4940 generic.go:334] "Generic (PLEG): container finished" podID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerID="36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d" exitCode=0 Feb 23 09:34:08 crc kubenswrapper[4940]: I0223 09:34:08.153815 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerDied","Data":"36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d"} Feb 23 09:34:09 crc kubenswrapper[4940]: I0223 09:34:09.165697 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" 
event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerStarted","Data":"e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015"} Feb 23 09:34:09 crc kubenswrapper[4940]: I0223 09:34:09.185093 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmqvl" podStartSLOduration=19.691347337 podStartE2EDuration="26.185059483s" podCreationTimestamp="2026-02-23 09:33:43 +0000 UTC" firstStartedPulling="2026-02-23 09:34:02.050328378 +0000 UTC m=+2773.433534535" lastFinishedPulling="2026-02-23 09:34:08.544040524 +0000 UTC m=+2779.927246681" observedRunningTime="2026-02-23 09:34:09.181879873 +0000 UTC m=+2780.565086070" watchObservedRunningTime="2026-02-23 09:34:09.185059483 +0000 UTC m=+2780.568265640" Feb 23 09:34:13 crc kubenswrapper[4940]: I0223 09:34:13.828705 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 09:34:13 crc kubenswrapper[4940]: I0223 09:34:13.985544 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:13 crc kubenswrapper[4940]: I0223 09:34:13.985690 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:15 crc kubenswrapper[4940]: I0223 09:34:15.042510 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmqvl" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="registry-server" probeResult="failure" output=< Feb 23 09:34:15 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:34:15 crc kubenswrapper[4940]: > Feb 23 09:34:15 crc kubenswrapper[4940]: I0223 09:34:15.243059 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"c7cd2a10-7128-40ff-98b8-6d3026b08566","Type":"ContainerStarted","Data":"0e03c7ffc9ed6ac4348d53f29a3feb3bbb26909466b8232d2fdf482217df0f15"} Feb 23 09:34:15 crc kubenswrapper[4940]: I0223 09:34:15.271115 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.618707868 podStartE2EDuration="49.271087269s" podCreationTimestamp="2026-02-23 09:33:26 +0000 UTC" firstStartedPulling="2026-02-23 09:33:28.173532051 +0000 UTC m=+2739.556738208" lastFinishedPulling="2026-02-23 09:34:13.825911432 +0000 UTC m=+2785.209117609" observedRunningTime="2026-02-23 09:34:15.267843147 +0000 UTC m=+2786.651049354" watchObservedRunningTime="2026-02-23 09:34:15.271087269 +0000 UTC m=+2786.654293466" Feb 23 09:34:24 crc kubenswrapper[4940]: I0223 09:34:24.034703 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:24 crc kubenswrapper[4940]: I0223 09:34:24.085937 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:24 crc kubenswrapper[4940]: I0223 09:34:24.271918 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmqvl"] Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.332714 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nmqvl" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="registry-server" containerID="cri-o://e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015" gracePeriod=2 Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.834642 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.989242 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-catalog-content\") pod \"539b3fff-1918-4211-b6cf-409a8ab9ccec\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.989801 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-utilities\") pod \"539b3fff-1918-4211-b6cf-409a8ab9ccec\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.989942 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw5t2\" (UniqueName: \"kubernetes.io/projected/539b3fff-1918-4211-b6cf-409a8ab9ccec-kube-api-access-vw5t2\") pod \"539b3fff-1918-4211-b6cf-409a8ab9ccec\" (UID: \"539b3fff-1918-4211-b6cf-409a8ab9ccec\") " Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.991068 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-utilities" (OuterVolumeSpecName: "utilities") pod "539b3fff-1918-4211-b6cf-409a8ab9ccec" (UID: "539b3fff-1918-4211-b6cf-409a8ab9ccec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:34:25 crc kubenswrapper[4940]: I0223 09:34:25.997365 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539b3fff-1918-4211-b6cf-409a8ab9ccec-kube-api-access-vw5t2" (OuterVolumeSpecName: "kube-api-access-vw5t2") pod "539b3fff-1918-4211-b6cf-409a8ab9ccec" (UID: "539b3fff-1918-4211-b6cf-409a8ab9ccec"). InnerVolumeSpecName "kube-api-access-vw5t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.092786 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.092833 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw5t2\" (UniqueName: \"kubernetes.io/projected/539b3fff-1918-4211-b6cf-409a8ab9ccec-kube-api-access-vw5t2\") on node \"crc\" DevicePath \"\"" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.108794 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "539b3fff-1918-4211-b6cf-409a8ab9ccec" (UID: "539b3fff-1918-4211-b6cf-409a8ab9ccec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.195155 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/539b3fff-1918-4211-b6cf-409a8ab9ccec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.343987 4940 generic.go:334] "Generic (PLEG): container finished" podID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerID="e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015" exitCode=0 Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.344067 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmqvl" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.344063 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerDied","Data":"e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015"} Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.344185 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmqvl" event={"ID":"539b3fff-1918-4211-b6cf-409a8ab9ccec","Type":"ContainerDied","Data":"808dce6e2117ad70b7e20657c1f540639645bbe227c8182bf89390db194e3b61"} Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.344204 4940 scope.go:117] "RemoveContainer" containerID="e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.379752 4940 scope.go:117] "RemoveContainer" containerID="36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.384097 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmqvl"] Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.393039 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmqvl"] Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.406286 4940 scope.go:117] "RemoveContainer" containerID="a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.449017 4940 scope.go:117] "RemoveContainer" containerID="e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015" Feb 23 09:34:26 crc kubenswrapper[4940]: E0223 09:34:26.449630 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015\": container with ID starting with e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015 not found: ID does not exist" containerID="e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.449693 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015"} err="failed to get container status \"e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015\": rpc error: code = NotFound desc = could not find container \"e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015\": container with ID starting with e4dbbce022bb70aee8f06f49663ed769f2bb7aa36dbbea8c691779d732ef0015 not found: ID does not exist" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.449731 4940 scope.go:117] "RemoveContainer" containerID="36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d" Feb 23 09:34:26 crc kubenswrapper[4940]: E0223 09:34:26.450688 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d\": container with ID starting with 36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d not found: ID does not exist" containerID="36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.450778 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d"} err="failed to get container status \"36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d\": rpc error: code = NotFound desc = could not find container \"36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d\": container with ID 
starting with 36e0386832be1921d9414cd24cadd01e00bcb2ccf3687aaa3dc06500640b2b7d not found: ID does not exist" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.450836 4940 scope.go:117] "RemoveContainer" containerID="a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e" Feb 23 09:34:26 crc kubenswrapper[4940]: E0223 09:34:26.451402 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e\": container with ID starting with a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e not found: ID does not exist" containerID="a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e" Feb 23 09:34:26 crc kubenswrapper[4940]: I0223 09:34:26.451439 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e"} err="failed to get container status \"a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e\": rpc error: code = NotFound desc = could not find container \"a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e\": container with ID starting with a7ba263a2f92cbbb976ae131db3296a7d4f914d786f66c1136dd2ca1b920615e not found: ID does not exist" Feb 23 09:34:27 crc kubenswrapper[4940]: I0223 09:34:27.371340 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" path="/var/lib/kubelet/pods/539b3fff-1918-4211-b6cf-409a8ab9ccec/volumes" Feb 23 09:36:01 crc kubenswrapper[4940]: I0223 09:36:01.430736 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:36:01 crc kubenswrapper[4940]: I0223 
09:36:01.431206 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.691604 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9n78h"] Feb 23 09:36:28 crc kubenswrapper[4940]: E0223 09:36:28.692400 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="registry-server" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.693591 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="registry-server" Feb 23 09:36:28 crc kubenswrapper[4940]: E0223 09:36:28.693625 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="extract-utilities" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.693632 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="extract-utilities" Feb 23 09:36:28 crc kubenswrapper[4940]: E0223 09:36:28.693644 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="extract-content" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.693650 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="extract-content" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.693846 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="539b3fff-1918-4211-b6cf-409a8ab9ccec" containerName="registry-server" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.695881 4940 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.715484 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9n78h"] Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.795254 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwsp\" (UniqueName: \"kubernetes.io/projected/25d3c502-d30d-443f-9269-bdd7eec6a432-kube-api-access-8pwsp\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.795326 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-catalog-content\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.795351 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-utilities\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.898037 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-catalog-content\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.898101 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-utilities\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.898296 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pwsp\" (UniqueName: \"kubernetes.io/projected/25d3c502-d30d-443f-9269-bdd7eec6a432-kube-api-access-8pwsp\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.898500 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-catalog-content\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.898566 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-utilities\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:28 crc kubenswrapper[4940]: I0223 09:36:28.928842 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwsp\" (UniqueName: \"kubernetes.io/projected/25d3c502-d30d-443f-9269-bdd7eec6a432-kube-api-access-8pwsp\") pod \"redhat-marketplace-9n78h\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:29 crc kubenswrapper[4940]: I0223 09:36:29.017265 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:29 crc kubenswrapper[4940]: I0223 09:36:29.570065 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9n78h"] Feb 23 09:36:30 crc kubenswrapper[4940]: I0223 09:36:30.474368 4940 generic.go:334] "Generic (PLEG): container finished" podID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerID="a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87" exitCode=0 Feb 23 09:36:30 crc kubenswrapper[4940]: I0223 09:36:30.474467 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerDied","Data":"a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87"} Feb 23 09:36:30 crc kubenswrapper[4940]: I0223 09:36:30.474828 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerStarted","Data":"082fa7cf5ba40405bb63ea5853fea9788670354122053fec6af283e7444177d3"} Feb 23 09:36:31 crc kubenswrapper[4940]: I0223 09:36:31.429602 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:36:31 crc kubenswrapper[4940]: I0223 09:36:31.430259 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:36:31 crc kubenswrapper[4940]: I0223 09:36:31.485665 4940 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerStarted","Data":"be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5"} Feb 23 09:36:32 crc kubenswrapper[4940]: I0223 09:36:32.496932 4940 generic.go:334] "Generic (PLEG): container finished" podID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerID="be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5" exitCode=0 Feb 23 09:36:32 crc kubenswrapper[4940]: I0223 09:36:32.497845 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerDied","Data":"be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5"} Feb 23 09:36:33 crc kubenswrapper[4940]: I0223 09:36:33.513038 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerStarted","Data":"a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142"} Feb 23 09:36:33 crc kubenswrapper[4940]: I0223 09:36:33.535727 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9n78h" podStartSLOduration=3.122636626 podStartE2EDuration="5.535706961s" podCreationTimestamp="2026-02-23 09:36:28 +0000 UTC" firstStartedPulling="2026-02-23 09:36:30.478572924 +0000 UTC m=+2921.861779081" lastFinishedPulling="2026-02-23 09:36:32.891643259 +0000 UTC m=+2924.274849416" observedRunningTime="2026-02-23 09:36:33.532021565 +0000 UTC m=+2924.915227722" watchObservedRunningTime="2026-02-23 09:36:33.535706961 +0000 UTC m=+2924.918913118" Feb 23 09:36:39 crc kubenswrapper[4940]: I0223 09:36:39.017796 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:39 crc kubenswrapper[4940]: I0223 
09:36:39.018371 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:39 crc kubenswrapper[4940]: I0223 09:36:39.071751 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:39 crc kubenswrapper[4940]: I0223 09:36:39.622016 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:40 crc kubenswrapper[4940]: I0223 09:36:40.312680 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9n78h"] Feb 23 09:36:41 crc kubenswrapper[4940]: I0223 09:36:41.580619 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9n78h" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="registry-server" containerID="cri-o://a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142" gracePeriod=2 Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.225749 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.385948 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-utilities\") pod \"25d3c502-d30d-443f-9269-bdd7eec6a432\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.386149 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-catalog-content\") pod \"25d3c502-d30d-443f-9269-bdd7eec6a432\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.386299 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwsp\" (UniqueName: \"kubernetes.io/projected/25d3c502-d30d-443f-9269-bdd7eec6a432-kube-api-access-8pwsp\") pod \"25d3c502-d30d-443f-9269-bdd7eec6a432\" (UID: \"25d3c502-d30d-443f-9269-bdd7eec6a432\") " Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.387603 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-utilities" (OuterVolumeSpecName: "utilities") pod "25d3c502-d30d-443f-9269-bdd7eec6a432" (UID: "25d3c502-d30d-443f-9269-bdd7eec6a432"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.395939 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d3c502-d30d-443f-9269-bdd7eec6a432-kube-api-access-8pwsp" (OuterVolumeSpecName: "kube-api-access-8pwsp") pod "25d3c502-d30d-443f-9269-bdd7eec6a432" (UID: "25d3c502-d30d-443f-9269-bdd7eec6a432"). InnerVolumeSpecName "kube-api-access-8pwsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.414707 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25d3c502-d30d-443f-9269-bdd7eec6a432" (UID: "25d3c502-d30d-443f-9269-bdd7eec6a432"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.489019 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.489049 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25d3c502-d30d-443f-9269-bdd7eec6a432-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.489061 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwsp\" (UniqueName: \"kubernetes.io/projected/25d3c502-d30d-443f-9269-bdd7eec6a432-kube-api-access-8pwsp\") on node \"crc\" DevicePath \"\"" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.598140 4940 generic.go:334] "Generic (PLEG): container finished" podID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerID="a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142" exitCode=0 Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.598186 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerDied","Data":"a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142"} Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.598217 4940 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-9n78h" event={"ID":"25d3c502-d30d-443f-9269-bdd7eec6a432","Type":"ContainerDied","Data":"082fa7cf5ba40405bb63ea5853fea9788670354122053fec6af283e7444177d3"} Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.598237 4940 scope.go:117] "RemoveContainer" containerID="a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.598241 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9n78h" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.640718 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9n78h"] Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.650442 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9n78h"] Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.659700 4940 scope.go:117] "RemoveContainer" containerID="be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.703682 4940 scope.go:117] "RemoveContainer" containerID="a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.751214 4940 scope.go:117] "RemoveContainer" containerID="a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142" Feb 23 09:36:42 crc kubenswrapper[4940]: E0223 09:36:42.751574 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142\": container with ID starting with a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142 not found: ID does not exist" containerID="a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.751716 4940 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142"} err="failed to get container status \"a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142\": rpc error: code = NotFound desc = could not find container \"a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142\": container with ID starting with a888a092b8c161f40e258d9f33e95e088e2d9b17a9033d46278dd6c2c2fa7142 not found: ID does not exist" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.751840 4940 scope.go:117] "RemoveContainer" containerID="be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5" Feb 23 09:36:42 crc kubenswrapper[4940]: E0223 09:36:42.752632 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5\": container with ID starting with be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5 not found: ID does not exist" containerID="be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.752702 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5"} err="failed to get container status \"be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5\": rpc error: code = NotFound desc = could not find container \"be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5\": container with ID starting with be2eb647f7acda6b1ec57df6501942b9f557fb22e4c1bdc39c15e9b7adbdf7f5 not found: ID does not exist" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.752735 4940 scope.go:117] "RemoveContainer" containerID="a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87" Feb 23 09:36:42 crc kubenswrapper[4940]: E0223 
09:36:42.753016 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87\": container with ID starting with a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87 not found: ID does not exist" containerID="a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87" Feb 23 09:36:42 crc kubenswrapper[4940]: I0223 09:36:42.753036 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87"} err="failed to get container status \"a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87\": rpc error: code = NotFound desc = could not find container \"a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87\": container with ID starting with a68ae12457004be939b736fd161c2b31acb45a75ed9a9751511c71f28f858f87 not found: ID does not exist" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.323578 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w8xkj"] Feb 23 09:36:43 crc kubenswrapper[4940]: E0223 09:36:43.324485 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="registry-server" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.324568 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="registry-server" Feb 23 09:36:43 crc kubenswrapper[4940]: E0223 09:36:43.324672 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="extract-content" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.324754 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="extract-content" Feb 23 09:36:43 crc 
kubenswrapper[4940]: E0223 09:36:43.324889 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="extract-utilities" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.324959 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="extract-utilities" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.325304 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" containerName="registry-server" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.328455 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.335601 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8xkj"] Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.377490 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d3c502-d30d-443f-9269-bdd7eec6a432" path="/var/lib/kubelet/pods/25d3c502-d30d-443f-9269-bdd7eec6a432/volumes" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.408122 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-catalog-content\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.408231 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-utilities\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " 
pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.408450 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56777\" (UniqueName: \"kubernetes.io/projected/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-kube-api-access-56777\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.511071 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-catalog-content\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.511152 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-utilities\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.511216 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56777\" (UniqueName: \"kubernetes.io/projected/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-kube-api-access-56777\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.511785 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-catalog-content\") pod \"certified-operators-w8xkj\" (UID: 
\"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.512140 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-utilities\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.532543 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56777\" (UniqueName: \"kubernetes.io/projected/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-kube-api-access-56777\") pod \"certified-operators-w8xkj\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:43 crc kubenswrapper[4940]: I0223 09:36:43.651096 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:36:44 crc kubenswrapper[4940]: I0223 09:36:44.159983 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8xkj"] Feb 23 09:36:44 crc kubenswrapper[4940]: I0223 09:36:44.623026 4940 generic.go:334] "Generic (PLEG): container finished" podID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerID="4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45" exitCode=0 Feb 23 09:36:44 crc kubenswrapper[4940]: I0223 09:36:44.623162 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8xkj" event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerDied","Data":"4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45"} Feb 23 09:36:44 crc kubenswrapper[4940]: I0223 09:36:44.623309 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8xkj" 
event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerStarted","Data":"0e55dfa682637e9e23f4ef4a4c5c943d5a88d3ebd240498f3f3a76ff160cfce0"} Feb 23 09:36:47 crc kubenswrapper[4940]: I0223 09:36:47.652701 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8xkj" event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerStarted","Data":"c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2"} Feb 23 09:36:49 crc kubenswrapper[4940]: I0223 09:36:49.820544 4940 generic.go:334] "Generic (PLEG): container finished" podID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerID="c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2" exitCode=0 Feb 23 09:36:49 crc kubenswrapper[4940]: I0223 09:36:49.820659 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8xkj" event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerDied","Data":"c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2"} Feb 23 09:36:55 crc kubenswrapper[4940]: I0223 09:36:55.893696 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8xkj" event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerStarted","Data":"1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b"} Feb 23 09:36:55 crc kubenswrapper[4940]: I0223 09:36:55.922123 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w8xkj" podStartSLOduration=2.240773839 podStartE2EDuration="12.922105285s" podCreationTimestamp="2026-02-23 09:36:43 +0000 UTC" firstStartedPulling="2026-02-23 09:36:44.626553817 +0000 UTC m=+2936.009759974" lastFinishedPulling="2026-02-23 09:36:55.307885253 +0000 UTC m=+2946.691091420" observedRunningTime="2026-02-23 09:36:55.916881091 +0000 UTC m=+2947.300087258" watchObservedRunningTime="2026-02-23 09:36:55.922105285 +0000 UTC 
m=+2947.305311442" Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.429625 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.430181 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.430241 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.431073 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.431139 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198" gracePeriod=600 Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.942736 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" 
containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198" exitCode=0 Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.942790 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"} Feb 23 09:37:01 crc kubenswrapper[4940]: I0223 09:37:01.942837 4940 scope.go:117] "RemoveContainer" containerID="b8953f6e299fb6e672b60d70959469929c73e7840881290e17fc29267585d923" Feb 23 09:37:02 crc kubenswrapper[4940]: E0223 09:37:02.079733 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:37:02 crc kubenswrapper[4940]: I0223 09:37:02.953787 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198" Feb 23 09:37:02 crc kubenswrapper[4940]: E0223 09:37:02.954127 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:37:03 crc kubenswrapper[4940]: I0223 09:37:03.652153 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:37:03 crc kubenswrapper[4940]: 
I0223 09:37:03.653665 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:37:03 crc kubenswrapper[4940]: I0223 09:37:03.716752 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:37:04 crc kubenswrapper[4940]: I0223 09:37:04.010872 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:37:04 crc kubenswrapper[4940]: I0223 09:37:04.058906 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8xkj"] Feb 23 09:37:05 crc kubenswrapper[4940]: I0223 09:37:05.976511 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w8xkj" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="registry-server" containerID="cri-o://1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b" gracePeriod=2 Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.578500 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.650900 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-catalog-content\") pod \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.651017 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-utilities\") pod \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.651080 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56777\" (UniqueName: \"kubernetes.io/projected/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-kube-api-access-56777\") pod \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\" (UID: \"3d7bb6d0-bc44-4090-814f-b06f16fb0b87\") " Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.651821 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-utilities" (OuterVolumeSpecName: "utilities") pod "3d7bb6d0-bc44-4090-814f-b06f16fb0b87" (UID: "3d7bb6d0-bc44-4090-814f-b06f16fb0b87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.657583 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-kube-api-access-56777" (OuterVolumeSpecName: "kube-api-access-56777") pod "3d7bb6d0-bc44-4090-814f-b06f16fb0b87" (UID: "3d7bb6d0-bc44-4090-814f-b06f16fb0b87"). InnerVolumeSpecName "kube-api-access-56777". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.703886 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d7bb6d0-bc44-4090-814f-b06f16fb0b87" (UID: "3d7bb6d0-bc44-4090-814f-b06f16fb0b87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.753464 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.753501 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.753514 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56777\" (UniqueName: \"kubernetes.io/projected/3d7bb6d0-bc44-4090-814f-b06f16fb0b87-kube-api-access-56777\") on node \"crc\" DevicePath \"\"" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.988252 4940 generic.go:334] "Generic (PLEG): container finished" podID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerID="1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b" exitCode=0 Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.988293 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8xkj" event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerDied","Data":"1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b"} Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.988318 4940 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-w8xkj" event={"ID":"3d7bb6d0-bc44-4090-814f-b06f16fb0b87","Type":"ContainerDied","Data":"0e55dfa682637e9e23f4ef4a4c5c943d5a88d3ebd240498f3f3a76ff160cfce0"} Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.988335 4940 scope.go:117] "RemoveContainer" containerID="1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b" Feb 23 09:37:06 crc kubenswrapper[4940]: I0223 09:37:06.988337 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8xkj" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.022820 4940 scope.go:117] "RemoveContainer" containerID="c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.034704 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8xkj"] Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.047910 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w8xkj"] Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.079846 4940 scope.go:117] "RemoveContainer" containerID="4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.102515 4940 scope.go:117] "RemoveContainer" containerID="1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b" Feb 23 09:37:07 crc kubenswrapper[4940]: E0223 09:37:07.103156 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b\": container with ID starting with 1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b not found: ID does not exist" containerID="1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 
09:37:07.103232 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b"} err="failed to get container status \"1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b\": rpc error: code = NotFound desc = could not find container \"1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b\": container with ID starting with 1f473ec726fe95366d9e0f277891d3210ac28feb8f2a1f6244c0071c7cef3e8b not found: ID does not exist" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.103272 4940 scope.go:117] "RemoveContainer" containerID="c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2" Feb 23 09:37:07 crc kubenswrapper[4940]: E0223 09:37:07.103784 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2\": container with ID starting with c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2 not found: ID does not exist" containerID="c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.103848 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2"} err="failed to get container status \"c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2\": rpc error: code = NotFound desc = could not find container \"c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2\": container with ID starting with c816726d0a42a54df435d777124789d9704db5d7f0e0e1fda72ddaf19397faa2 not found: ID does not exist" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.103882 4940 scope.go:117] "RemoveContainer" containerID="4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45" Feb 23 09:37:07 crc 
kubenswrapper[4940]: E0223 09:37:07.104306 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45\": container with ID starting with 4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45 not found: ID does not exist" containerID="4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.104343 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45"} err="failed to get container status \"4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45\": rpc error: code = NotFound desc = could not find container \"4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45\": container with ID starting with 4b4e957966f439b909266b66ca322cd57608b71d743fd8c228e80d45d3fb2e45 not found: ID does not exist" Feb 23 09:37:07 crc kubenswrapper[4940]: I0223 09:37:07.358344 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" path="/var/lib/kubelet/pods/3d7bb6d0-bc44-4090-814f-b06f16fb0b87/volumes" Feb 23 09:37:15 crc kubenswrapper[4940]: I0223 09:37:15.346195 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198" Feb 23 09:37:15 crc kubenswrapper[4940]: E0223 09:37:15.346914 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:37:28 crc 
kubenswrapper[4940]: I0223 09:37:28.354377 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198" Feb 23 09:37:28 crc kubenswrapper[4940]: E0223 09:37:28.355180 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.414008 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b726x"] Feb 23 09:37:31 crc kubenswrapper[4940]: E0223 09:37:31.415394 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="extract-content" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.415420 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="extract-content" Feb 23 09:37:31 crc kubenswrapper[4940]: E0223 09:37:31.415439 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="registry-server" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.415450 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="registry-server" Feb 23 09:37:31 crc kubenswrapper[4940]: E0223 09:37:31.415476 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="extract-utilities" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.415487 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" 
containerName="extract-utilities" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.416047 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7bb6d0-bc44-4090-814f-b06f16fb0b87" containerName="registry-server" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.418697 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.430862 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b726x"] Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.496750 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz48l\" (UniqueName: \"kubernetes.io/projected/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-kube-api-access-pz48l\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.496908 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-catalog-content\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.496982 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-utilities\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.598621 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-catalog-content\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.598819 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-utilities\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.598899 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz48l\" (UniqueName: \"kubernetes.io/projected/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-kube-api-access-pz48l\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.599732 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-catalog-content\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.600002 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-utilities\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.619702 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz48l\" (UniqueName: 
\"kubernetes.io/projected/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-kube-api-access-pz48l\") pod \"community-operators-b726x\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") " pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:31 crc kubenswrapper[4940]: I0223 09:37:31.740166 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:32 crc kubenswrapper[4940]: I0223 09:37:32.319035 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b726x"] Feb 23 09:37:33 crc kubenswrapper[4940]: I0223 09:37:33.241191 4940 generic.go:334] "Generic (PLEG): container finished" podID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerID="f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e" exitCode=0 Feb 23 09:37:33 crc kubenswrapper[4940]: I0223 09:37:33.241304 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerDied","Data":"f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e"} Feb 23 09:37:33 crc kubenswrapper[4940]: I0223 09:37:33.241442 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerStarted","Data":"a8d91757d662ccaf2e8c3222aa2d342b21eb40304cc2f56b889a5ee0fbedac43"} Feb 23 09:37:34 crc kubenswrapper[4940]: I0223 09:37:34.251552 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerStarted","Data":"d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671"} Feb 23 09:37:36 crc kubenswrapper[4940]: I0223 09:37:36.270288 4940 generic.go:334] "Generic (PLEG): container finished" podID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" 
containerID="d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671" exitCode=0 Feb 23 09:37:36 crc kubenswrapper[4940]: I0223 09:37:36.270841 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerDied","Data":"d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671"} Feb 23 09:37:37 crc kubenswrapper[4940]: I0223 09:37:37.281363 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerStarted","Data":"62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c"} Feb 23 09:37:37 crc kubenswrapper[4940]: I0223 09:37:37.302532 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b726x" podStartSLOduration=2.593061977 podStartE2EDuration="6.302509776s" podCreationTimestamp="2026-02-23 09:37:31 +0000 UTC" firstStartedPulling="2026-02-23 09:37:33.243447167 +0000 UTC m=+2984.626653324" lastFinishedPulling="2026-02-23 09:37:36.952894966 +0000 UTC m=+2988.336101123" observedRunningTime="2026-02-23 09:37:37.297155968 +0000 UTC m=+2988.680362125" watchObservedRunningTime="2026-02-23 09:37:37.302509776 +0000 UTC m=+2988.685715933" Feb 23 09:37:41 crc kubenswrapper[4940]: I0223 09:37:41.740826 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:41 crc kubenswrapper[4940]: I0223 09:37:41.741208 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:41 crc kubenswrapper[4940]: I0223 09:37:41.785878 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b726x" Feb 23 09:37:42 crc kubenswrapper[4940]: I0223 
09:37:42.380112 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b726x"
Feb 23 09:37:42 crc kubenswrapper[4940]: I0223 09:37:42.445097 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b726x"]
Feb 23 09:37:43 crc kubenswrapper[4940]: I0223 09:37:43.346574 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:37:43 crc kubenswrapper[4940]: E0223 09:37:43.347269 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:37:44 crc kubenswrapper[4940]: I0223 09:37:44.344010 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b726x" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="registry-server" containerID="cri-o://62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c" gracePeriod=2
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.026640 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b726x"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.197848 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz48l\" (UniqueName: \"kubernetes.io/projected/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-kube-api-access-pz48l\") pod \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") "
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.198215 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-utilities\") pod \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") "
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.198258 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-catalog-content\") pod \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\" (UID: \"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a\") "
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.199128 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-utilities" (OuterVolumeSpecName: "utilities") pod "3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" (UID: "3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.221452 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-kube-api-access-pz48l" (OuterVolumeSpecName: "kube-api-access-pz48l") pod "3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" (UID: "3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a"). InnerVolumeSpecName "kube-api-access-pz48l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.260263 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" (UID: "3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.301018 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.301228 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.301325 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz48l\" (UniqueName: \"kubernetes.io/projected/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a-kube-api-access-pz48l\") on node \"crc\" DevicePath \"\""
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.355579 4940 generic.go:334] "Generic (PLEG): container finished" podID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerID="62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c" exitCode=0
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.355584 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerDied","Data":"62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c"}
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.355660 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b726x" event={"ID":"3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a","Type":"ContainerDied","Data":"a8d91757d662ccaf2e8c3222aa2d342b21eb40304cc2f56b889a5ee0fbedac43"}
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.355692 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b726x"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.355712 4940 scope.go:117] "RemoveContainer" containerID="62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.381836 4940 scope.go:117] "RemoveContainer" containerID="d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.400223 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b726x"]
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.408026 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b726x"]
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.427907 4940 scope.go:117] "RemoveContainer" containerID="f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.463554 4940 scope.go:117] "RemoveContainer" containerID="62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c"
Feb 23 09:37:45 crc kubenswrapper[4940]: E0223 09:37:45.464452 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c\": container with ID starting with 62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c not found: ID does not exist" containerID="62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.464568 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c"} err="failed to get container status \"62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c\": rpc error: code = NotFound desc = could not find container \"62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c\": container with ID starting with 62f01e51fe866b21e50ab3c5cf295f7ae88ded611814b6474ac717c3cf7cb03c not found: ID does not exist"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.464689 4940 scope.go:117] "RemoveContainer" containerID="d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671"
Feb 23 09:37:45 crc kubenswrapper[4940]: E0223 09:37:45.465060 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671\": container with ID starting with d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671 not found: ID does not exist" containerID="d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.465110 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671"} err="failed to get container status \"d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671\": rpc error: code = NotFound desc = could not find container \"d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671\": container with ID starting with d307d81631fa6ac21ea334c5e9b40460563f35eb80c52e8f1795da95c5db8671 not found: ID does not exist"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.465131 4940 scope.go:117] "RemoveContainer" containerID="f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e"
Feb 23 09:37:45 crc kubenswrapper[4940]: E0223 09:37:45.465415 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e\": container with ID starting with f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e not found: ID does not exist" containerID="f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e"
Feb 23 09:37:45 crc kubenswrapper[4940]: I0223 09:37:45.465443 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e"} err="failed to get container status \"f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e\": rpc error: code = NotFound desc = could not find container \"f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e\": container with ID starting with f75c4c5ad463c437f98b5fdb0838167a5a7edef2ee24ddde7f962d335542136e not found: ID does not exist"
Feb 23 09:37:47 crc kubenswrapper[4940]: I0223 09:37:47.933373 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" path="/var/lib/kubelet/pods/3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a/volumes"
Feb 23 09:37:58 crc kubenswrapper[4940]: I0223 09:37:58.346027 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:37:58 crc kubenswrapper[4940]: E0223 09:37:58.346918 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:38:09 crc kubenswrapper[4940]: I0223 09:38:09.354127 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:38:09 crc kubenswrapper[4940]: E0223 09:38:09.355190 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:38:22 crc kubenswrapper[4940]: I0223 09:38:22.345838 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:38:22 crc kubenswrapper[4940]: E0223 09:38:22.346585 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:38:37 crc kubenswrapper[4940]: I0223 09:38:37.346030 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:38:37 crc kubenswrapper[4940]: E0223 09:38:37.347959 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:38:52 crc kubenswrapper[4940]: I0223 09:38:52.346812 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:38:52 crc kubenswrapper[4940]: E0223 09:38:52.347881 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:39:04 crc kubenswrapper[4940]: I0223 09:39:04.345855 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:39:04 crc kubenswrapper[4940]: E0223 09:39:04.347653 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:39:18 crc kubenswrapper[4940]: I0223 09:39:18.345904 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:39:18 crc kubenswrapper[4940]: E0223 09:39:18.346654 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:39:29 crc kubenswrapper[4940]: I0223 09:39:29.356131 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:39:29 crc kubenswrapper[4940]: E0223 09:39:29.360672 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:39:44 crc kubenswrapper[4940]: I0223 09:39:44.346860 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:39:44 crc kubenswrapper[4940]: E0223 09:39:44.347566 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:39:55 crc kubenswrapper[4940]: I0223 09:39:55.346045 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:39:55 crc kubenswrapper[4940]: E0223 09:39:55.347807 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:40:07 crc kubenswrapper[4940]: I0223 09:40:07.345937 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:40:07 crc kubenswrapper[4940]: E0223 09:40:07.346686 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:40:22 crc kubenswrapper[4940]: I0223 09:40:22.346059 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:40:22 crc kubenswrapper[4940]: E0223 09:40:22.346918 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:40:33 crc kubenswrapper[4940]: I0223 09:40:33.346252 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:40:33 crc kubenswrapper[4940]: E0223 09:40:33.347195 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:40:47 crc kubenswrapper[4940]: I0223 09:40:47.346659 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:40:47 crc kubenswrapper[4940]: E0223 09:40:47.347467 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:40:58 crc kubenswrapper[4940]: I0223 09:40:58.345718 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:40:58 crc kubenswrapper[4940]: E0223 09:40:58.346538 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:41:11 crc kubenswrapper[4940]: I0223 09:41:11.363325 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:41:11 crc kubenswrapper[4940]: E0223 09:41:11.364941 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:41:22 crc kubenswrapper[4940]: I0223 09:41:22.346052 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:41:22 crc kubenswrapper[4940]: E0223 09:41:22.347070 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:41:33 crc kubenswrapper[4940]: I0223 09:41:33.346510 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:41:33 crc kubenswrapper[4940]: E0223 09:41:33.347440 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:41:46 crc kubenswrapper[4940]: I0223 09:41:46.347812 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:41:46 crc kubenswrapper[4940]: E0223 09:41:46.349740 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:41:59 crc kubenswrapper[4940]: I0223 09:41:59.345468 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:41:59 crc kubenswrapper[4940]: E0223 09:41:59.346346 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:42:12 crc kubenswrapper[4940]: I0223 09:42:12.345407 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198"
Feb 23 09:42:13 crc kubenswrapper[4940]: I0223 09:42:13.316130 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"961765df6f652b6e91dfb4f45c27aeee9b8de0cc5e96f4a51f8f8ee4d6eb0900"}
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.980161 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-flc9g"]
Feb 23 09:43:47 crc kubenswrapper[4940]: E0223 09:43:47.981755 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="extract-content"
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.981774 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="extract-content"
Feb 23 09:43:47 crc kubenswrapper[4940]: E0223 09:43:47.981792 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="extract-utilities"
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.981801 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="extract-utilities"
Feb 23 09:43:47 crc kubenswrapper[4940]: E0223 09:43:47.981824 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="registry-server"
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.981832 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="registry-server"
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.982063 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3707b2ce-8bf8-47b8-98e2-b68ee4f73f6a" containerName="registry-server"
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.983802 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:47 crc kubenswrapper[4940]: I0223 09:43:47.991559 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flc9g"]
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.023301 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-catalog-content\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.023426 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-utilities\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.023876 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgch8\" (UniqueName: \"kubernetes.io/projected/0fda1880-2f59-4a17-a02a-807493f30e00-kube-api-access-bgch8\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.124990 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgch8\" (UniqueName: \"kubernetes.io/projected/0fda1880-2f59-4a17-a02a-807493f30e00-kube-api-access-bgch8\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.125109 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-catalog-content\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.125141 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-utilities\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.125695 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-utilities\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.126017 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-catalog-content\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.148494 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgch8\" (UniqueName: \"kubernetes.io/projected/0fda1880-2f59-4a17-a02a-807493f30e00-kube-api-access-bgch8\") pod \"redhat-operators-flc9g\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") " pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.335688 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:48 crc kubenswrapper[4940]: I0223 09:43:48.835966 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flc9g"]
Feb 23 09:43:49 crc kubenswrapper[4940]: I0223 09:43:49.183756 4940 generic.go:334] "Generic (PLEG): container finished" podID="0fda1880-2f59-4a17-a02a-807493f30e00" containerID="8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09" exitCode=0
Feb 23 09:43:49 crc kubenswrapper[4940]: I0223 09:43:49.183861 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerDied","Data":"8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09"}
Feb 23 09:43:49 crc kubenswrapper[4940]: I0223 09:43:49.184090 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerStarted","Data":"806e1c5b9ad566d7aacc1631629841653229243c05114d1ef07b287bd217ebbb"}
Feb 23 09:43:49 crc kubenswrapper[4940]: I0223 09:43:49.186020 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 09:43:50 crc kubenswrapper[4940]: I0223 09:43:50.199852 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerStarted","Data":"8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c"}
Feb 23 09:43:56 crc kubenswrapper[4940]: I0223 09:43:56.257999 4940 generic.go:334] "Generic (PLEG): container finished" podID="0fda1880-2f59-4a17-a02a-807493f30e00" containerID="8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c" exitCode=0
Feb 23 09:43:56 crc kubenswrapper[4940]: I0223 09:43:56.258072 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerDied","Data":"8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c"}
Feb 23 09:43:57 crc kubenswrapper[4940]: I0223 09:43:57.269563 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerStarted","Data":"a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5"}
Feb 23 09:43:57 crc kubenswrapper[4940]: I0223 09:43:57.307795 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flc9g" podStartSLOduration=2.805195549 podStartE2EDuration="10.307772225s" podCreationTimestamp="2026-02-23 09:43:47 +0000 UTC" firstStartedPulling="2026-02-23 09:43:49.185704347 +0000 UTC m=+3360.568910514" lastFinishedPulling="2026-02-23 09:43:56.688281033 +0000 UTC m=+3368.071487190" observedRunningTime="2026-02-23 09:43:57.293410507 +0000 UTC m=+3368.676616684" watchObservedRunningTime="2026-02-23 09:43:57.307772225 +0000 UTC m=+3368.690978382"
Feb 23 09:43:58 crc kubenswrapper[4940]: I0223 09:43:58.336415 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:58 crc kubenswrapper[4940]: I0223 09:43:58.336789 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:43:59 crc kubenswrapper[4940]: I0223 09:43:59.381759 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flc9g" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" probeResult="failure" output=<
Feb 23 09:43:59 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s
Feb 23 09:43:59 crc kubenswrapper[4940]: >
Feb 23 09:44:09 crc kubenswrapper[4940]: I0223 09:44:09.406164 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flc9g" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" probeResult="failure" output=<
Feb 23 09:44:09 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s
Feb 23 09:44:09 crc kubenswrapper[4940]: >
Feb 23 09:44:19 crc kubenswrapper[4940]: I0223 09:44:19.391563 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flc9g" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" probeResult="failure" output=<
Feb 23 09:44:19 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s
Feb 23 09:44:19 crc kubenswrapper[4940]: >
Feb 23 09:44:28 crc kubenswrapper[4940]: I0223 09:44:28.385048 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:44:28 crc kubenswrapper[4940]: I0223 09:44:28.441350 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:44:28 crc kubenswrapper[4940]: I0223 09:44:28.626153 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flc9g"]
Feb 23 09:44:29 crc kubenswrapper[4940]: I0223 09:44:29.552054 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-flc9g" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" containerID="cri-o://a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5" gracePeriod=2
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.329060 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flc9g"
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.384950 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-catalog-content\") pod \"0fda1880-2f59-4a17-a02a-807493f30e00\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") "
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.385129 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-utilities\") pod \"0fda1880-2f59-4a17-a02a-807493f30e00\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") "
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.385211 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgch8\" (UniqueName: \"kubernetes.io/projected/0fda1880-2f59-4a17-a02a-807493f30e00-kube-api-access-bgch8\") pod \"0fda1880-2f59-4a17-a02a-807493f30e00\" (UID: \"0fda1880-2f59-4a17-a02a-807493f30e00\") "
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.385863 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-utilities" (OuterVolumeSpecName: "utilities") pod "0fda1880-2f59-4a17-a02a-807493f30e00" (UID: "0fda1880-2f59-4a17-a02a-807493f30e00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.391220 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fda1880-2f59-4a17-a02a-807493f30e00-kube-api-access-bgch8" (OuterVolumeSpecName: "kube-api-access-bgch8") pod "0fda1880-2f59-4a17-a02a-807493f30e00" (UID: "0fda1880-2f59-4a17-a02a-807493f30e00"). InnerVolumeSpecName "kube-api-access-bgch8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.488234 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgch8\" (UniqueName: \"kubernetes.io/projected/0fda1880-2f59-4a17-a02a-807493f30e00-kube-api-access-bgch8\") on node \"crc\" DevicePath \"\""
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.488267 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.537453 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fda1880-2f59-4a17-a02a-807493f30e00" (UID: "0fda1880-2f59-4a17-a02a-807493f30e00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.561695 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-flc9g" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.561720 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerDied","Data":"a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5"} Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.561760 4940 scope.go:117] "RemoveContainer" containerID="a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.561607 4940 generic.go:334] "Generic (PLEG): container finished" podID="0fda1880-2f59-4a17-a02a-807493f30e00" containerID="a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5" exitCode=0 Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.561857 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flc9g" event={"ID":"0fda1880-2f59-4a17-a02a-807493f30e00","Type":"ContainerDied","Data":"806e1c5b9ad566d7aacc1631629841653229243c05114d1ef07b287bd217ebbb"} Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.590834 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fda1880-2f59-4a17-a02a-807493f30e00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.592096 4940 scope.go:117] "RemoveContainer" containerID="8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.605372 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flc9g"] Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.612795 4940 scope.go:117] "RemoveContainer" containerID="8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 
09:44:30.617348 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-flc9g"] Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.659092 4940 scope.go:117] "RemoveContainer" containerID="a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5" Feb 23 09:44:30 crc kubenswrapper[4940]: E0223 09:44:30.659490 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5\": container with ID starting with a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5 not found: ID does not exist" containerID="a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.659560 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5"} err="failed to get container status \"a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5\": rpc error: code = NotFound desc = could not find container \"a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5\": container with ID starting with a59a40084aa28eb15ce5757419768bf93364174acf4e180456e644cbe80ec4c5 not found: ID does not exist" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.659596 4940 scope.go:117] "RemoveContainer" containerID="8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c" Feb 23 09:44:30 crc kubenswrapper[4940]: E0223 09:44:30.660219 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c\": container with ID starting with 8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c not found: ID does not exist" 
containerID="8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.660253 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c"} err="failed to get container status \"8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c\": rpc error: code = NotFound desc = could not find container \"8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c\": container with ID starting with 8f4781146774f9fadf372ae756efce6a30e1f95849191c33defd9be8f96d481c not found: ID does not exist" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.660276 4940 scope.go:117] "RemoveContainer" containerID="8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09" Feb 23 09:44:30 crc kubenswrapper[4940]: E0223 09:44:30.660738 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09\": container with ID starting with 8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09 not found: ID does not exist" containerID="8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09" Feb 23 09:44:30 crc kubenswrapper[4940]: I0223 09:44:30.660860 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09"} err="failed to get container status \"8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09\": rpc error: code = NotFound desc = could not find container \"8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09\": container with ID starting with 8d6988b68a5e1110222c6b1de9e03d379d06008e523b19e53ed80699e1a7cc09 not found: ID does not exist" Feb 23 09:44:31 crc kubenswrapper[4940]: I0223 09:44:31.356080 4940 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" path="/var/lib/kubelet/pods/0fda1880-2f59-4a17-a02a-807493f30e00/volumes" Feb 23 09:44:31 crc kubenswrapper[4940]: I0223 09:44:31.429237 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:44:31 crc kubenswrapper[4940]: I0223 09:44:31.429304 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.175297 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr"] Feb 23 09:45:00 crc kubenswrapper[4940]: E0223 09:45:00.176373 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.176392 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" Feb 23 09:45:00 crc kubenswrapper[4940]: E0223 09:45:00.176420 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="extract-utilities" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.176428 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="extract-utilities" Feb 23 09:45:00 crc kubenswrapper[4940]: E0223 09:45:00.176456 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="extract-content" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.176464 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="extract-content" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.176704 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fda1880-2f59-4a17-a02a-807493f30e00" containerName="registry-server" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.177578 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.183353 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr"] Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.190397 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.203361 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.237776 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb9a5cb-19c2-4d40-9140-07337f5528a4-secret-volume\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.238020 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbjx\" (UniqueName: 
\"kubernetes.io/projected/ecb9a5cb-19c2-4d40-9140-07337f5528a4-kube-api-access-tfbjx\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.238079 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb9a5cb-19c2-4d40-9140-07337f5528a4-config-volume\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.340038 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb9a5cb-19c2-4d40-9140-07337f5528a4-config-volume\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.340125 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb9a5cb-19c2-4d40-9140-07337f5528a4-secret-volume\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.340249 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbjx\" (UniqueName: \"kubernetes.io/projected/ecb9a5cb-19c2-4d40-9140-07337f5528a4-kube-api-access-tfbjx\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc 
kubenswrapper[4940]: I0223 09:45:00.340898 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb9a5cb-19c2-4d40-9140-07337f5528a4-config-volume\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.347173 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb9a5cb-19c2-4d40-9140-07337f5528a4-secret-volume\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.369066 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbjx\" (UniqueName: \"kubernetes.io/projected/ecb9a5cb-19c2-4d40-9140-07337f5528a4-kube-api-access-tfbjx\") pod \"collect-profiles-29530665-dgsmr\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:00 crc kubenswrapper[4940]: I0223 09:45:00.551765 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:01 crc kubenswrapper[4940]: I0223 09:45:01.044095 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr"] Feb 23 09:45:01 crc kubenswrapper[4940]: I0223 09:45:01.429216 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:45:01 crc kubenswrapper[4940]: I0223 09:45:01.429585 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:45:01 crc kubenswrapper[4940]: I0223 09:45:01.873814 4940 generic.go:334] "Generic (PLEG): container finished" podID="ecb9a5cb-19c2-4d40-9140-07337f5528a4" containerID="fe55628fb268e413a9d02475d6dd48ef3256ebf1e334f0df4e93d62cb3d00500" exitCode=0 Feb 23 09:45:01 crc kubenswrapper[4940]: I0223 09:45:01.873869 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" event={"ID":"ecb9a5cb-19c2-4d40-9140-07337f5528a4","Type":"ContainerDied","Data":"fe55628fb268e413a9d02475d6dd48ef3256ebf1e334f0df4e93d62cb3d00500"} Feb 23 09:45:01 crc kubenswrapper[4940]: I0223 09:45:01.873917 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" event={"ID":"ecb9a5cb-19c2-4d40-9140-07337f5528a4","Type":"ContainerStarted","Data":"b730e7db4a34230ad75b8ba74a09585e4f237f28457ac6431fe3e1d5ade82073"} Feb 23 09:45:03 
crc kubenswrapper[4940]: I0223 09:45:03.530747 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.619091 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfbjx\" (UniqueName: \"kubernetes.io/projected/ecb9a5cb-19c2-4d40-9140-07337f5528a4-kube-api-access-tfbjx\") pod \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.619141 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb9a5cb-19c2-4d40-9140-07337f5528a4-config-volume\") pod \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.619199 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb9a5cb-19c2-4d40-9140-07337f5528a4-secret-volume\") pod \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\" (UID: \"ecb9a5cb-19c2-4d40-9140-07337f5528a4\") " Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.619911 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb9a5cb-19c2-4d40-9140-07337f5528a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "ecb9a5cb-19c2-4d40-9140-07337f5528a4" (UID: "ecb9a5cb-19c2-4d40-9140-07337f5528a4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.625234 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecb9a5cb-19c2-4d40-9140-07337f5528a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecb9a5cb-19c2-4d40-9140-07337f5528a4" (UID: "ecb9a5cb-19c2-4d40-9140-07337f5528a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.628372 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb9a5cb-19c2-4d40-9140-07337f5528a4-kube-api-access-tfbjx" (OuterVolumeSpecName: "kube-api-access-tfbjx") pod "ecb9a5cb-19c2-4d40-9140-07337f5528a4" (UID: "ecb9a5cb-19c2-4d40-9140-07337f5528a4"). InnerVolumeSpecName "kube-api-access-tfbjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.722117 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfbjx\" (UniqueName: \"kubernetes.io/projected/ecb9a5cb-19c2-4d40-9140-07337f5528a4-kube-api-access-tfbjx\") on node \"crc\" DevicePath \"\"" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.722163 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecb9a5cb-19c2-4d40-9140-07337f5528a4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.722176 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecb9a5cb-19c2-4d40-9140-07337f5528a4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.891972 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" 
event={"ID":"ecb9a5cb-19c2-4d40-9140-07337f5528a4","Type":"ContainerDied","Data":"b730e7db4a34230ad75b8ba74a09585e4f237f28457ac6431fe3e1d5ade82073"} Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.892010 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b730e7db4a34230ad75b8ba74a09585e4f237f28457ac6431fe3e1d5ade82073" Feb 23 09:45:03 crc kubenswrapper[4940]: I0223 09:45:03.892084 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530665-dgsmr" Feb 23 09:45:04 crc kubenswrapper[4940]: I0223 09:45:04.628997 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr"] Feb 23 09:45:04 crc kubenswrapper[4940]: I0223 09:45:04.637348 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530620-lwqfr"] Feb 23 09:45:05 crc kubenswrapper[4940]: I0223 09:45:05.360888 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a33e39-df26-4233-aca9-edbe7b31aa62" path="/var/lib/kubelet/pods/96a33e39-df26-4233-aca9-edbe7b31aa62/volumes" Feb 23 09:45:31 crc kubenswrapper[4940]: I0223 09:45:31.429581 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:45:31 crc kubenswrapper[4940]: I0223 09:45:31.430016 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:45:31 crc 
kubenswrapper[4940]: I0223 09:45:31.430071 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:45:31 crc kubenswrapper[4940]: I0223 09:45:31.430824 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"961765df6f652b6e91dfb4f45c27aeee9b8de0cc5e96f4a51f8f8ee4d6eb0900"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:45:31 crc kubenswrapper[4940]: I0223 09:45:31.430866 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://961765df6f652b6e91dfb4f45c27aeee9b8de0cc5e96f4a51f8f8ee4d6eb0900" gracePeriod=600 Feb 23 09:45:32 crc kubenswrapper[4940]: I0223 09:45:32.144773 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="961765df6f652b6e91dfb4f45c27aeee9b8de0cc5e96f4a51f8f8ee4d6eb0900" exitCode=0 Feb 23 09:45:32 crc kubenswrapper[4940]: I0223 09:45:32.144847 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"961765df6f652b6e91dfb4f45c27aeee9b8de0cc5e96f4a51f8f8ee4d6eb0900"} Feb 23 09:45:32 crc kubenswrapper[4940]: I0223 09:45:32.145130 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926"} Feb 23 09:45:32 crc kubenswrapper[4940]: I0223 
09:45:32.145150 4940 scope.go:117] "RemoveContainer" containerID="4dc5d52077cd2baef92405476f8423bdee15b141572c600fd683d3ef957f6198" Feb 23 09:45:36 crc kubenswrapper[4940]: I0223 09:45:36.114004 4940 scope.go:117] "RemoveContainer" containerID="b4e34acd75f184b061368d58273c59c90b3d4ace233d52dd0374a92d234322e7" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.589704 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-65zf2"] Feb 23 09:46:50 crc kubenswrapper[4940]: E0223 09:46:50.590847 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb9a5cb-19c2-4d40-9140-07337f5528a4" containerName="collect-profiles" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.590867 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb9a5cb-19c2-4d40-9140-07337f5528a4" containerName="collect-profiles" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.591123 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb9a5cb-19c2-4d40-9140-07337f5528a4" containerName="collect-profiles" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.601904 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-65zf2" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.611407 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65zf2"] Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.746938 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspmf\" (UniqueName: \"kubernetes.io/projected/300123d1-258a-485d-b889-73125ca992ca-kube-api-access-kspmf\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.746989 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-utilities\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.747783 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-catalog-content\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.849149 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kspmf\" (UniqueName: \"kubernetes.io/projected/300123d1-258a-485d-b889-73125ca992ca-kube-api-access-kspmf\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2" Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.849470 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-utilities\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.849671 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-catalog-content\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.850094 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-utilities\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.850204 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-catalog-content\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.871887 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kspmf\" (UniqueName: \"kubernetes.io/projected/300123d1-258a-485d-b889-73125ca992ca-kube-api-access-kspmf\") pod \"certified-operators-65zf2\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") " pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:46:50 crc kubenswrapper[4940]: I0223 09:46:50.933954 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:46:52 crc kubenswrapper[4940]: I0223 09:46:52.268835 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-65zf2"]
Feb 23 09:46:52 crc kubenswrapper[4940]: I0223 09:46:52.865993 4940 generic.go:334] "Generic (PLEG): container finished" podID="300123d1-258a-485d-b889-73125ca992ca" containerID="767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886" exitCode=0
Feb 23 09:46:52 crc kubenswrapper[4940]: I0223 09:46:52.866048 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerDied","Data":"767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886"}
Feb 23 09:46:52 crc kubenswrapper[4940]: I0223 09:46:52.866670 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerStarted","Data":"c0e1f838e48c852b1a9fb42aab91f1453338eb2ed2d242324ada6e61f3493366"}
Feb 23 09:46:53 crc kubenswrapper[4940]: I0223 09:46:53.875235 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerStarted","Data":"5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505"}
Feb 23 09:46:55 crc kubenswrapper[4940]: I0223 09:46:55.895111 4940 generic.go:334] "Generic (PLEG): container finished" podID="300123d1-258a-485d-b889-73125ca992ca" containerID="5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505" exitCode=0
Feb 23 09:46:55 crc kubenswrapper[4940]: I0223 09:46:55.895182 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerDied","Data":"5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505"}
Feb 23 09:46:56 crc kubenswrapper[4940]: I0223 09:46:56.908359 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerStarted","Data":"9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804"}
Feb 23 09:46:56 crc kubenswrapper[4940]: I0223 09:46:56.945312 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-65zf2" podStartSLOduration=3.469114108 podStartE2EDuration="6.945285953s" podCreationTimestamp="2026-02-23 09:46:50 +0000 UTC" firstStartedPulling="2026-02-23 09:46:52.868260475 +0000 UTC m=+3544.251466632" lastFinishedPulling="2026-02-23 09:46:56.34443231 +0000 UTC m=+3547.727638477" observedRunningTime="2026-02-23 09:46:56.928706861 +0000 UTC m=+3548.311913028" watchObservedRunningTime="2026-02-23 09:46:56.945285953 +0000 UTC m=+3548.328492110"
Feb 23 09:47:00 crc kubenswrapper[4940]: I0223 09:47:00.934559 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:47:00 crc kubenswrapper[4940]: I0223 09:47:00.935159 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:47:02 crc kubenswrapper[4940]: I0223 09:47:02.001426 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-65zf2" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="registry-server" probeResult="failure" output=<
Feb 23 09:47:02 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s
Feb 23 09:47:02 crc kubenswrapper[4940]: >
Feb 23 09:47:10 crc kubenswrapper[4940]: I0223 09:47:10.983426 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:47:11 crc kubenswrapper[4940]: I0223 09:47:11.046054 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:47:11 crc kubenswrapper[4940]: I0223 09:47:11.231415 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65zf2"]
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.044604 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-65zf2" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="registry-server" containerID="cri-o://9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804" gracePeriod=2
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.784131 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.865209 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-utilities\") pod \"300123d1-258a-485d-b889-73125ca992ca\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") "
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.865401 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-catalog-content\") pod \"300123d1-258a-485d-b889-73125ca992ca\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") "
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.866225 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-utilities" (OuterVolumeSpecName: "utilities") pod "300123d1-258a-485d-b889-73125ca992ca" (UID: "300123d1-258a-485d-b889-73125ca992ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.866332 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.932685 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "300123d1-258a-485d-b889-73125ca992ca" (UID: "300123d1-258a-485d-b889-73125ca992ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.967293 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kspmf\" (UniqueName: \"kubernetes.io/projected/300123d1-258a-485d-b889-73125ca992ca-kube-api-access-kspmf\") pod \"300123d1-258a-485d-b889-73125ca992ca\" (UID: \"300123d1-258a-485d-b889-73125ca992ca\") "
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.968294 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/300123d1-258a-485d-b889-73125ca992ca-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 09:47:12 crc kubenswrapper[4940]: I0223 09:47:12.978352 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/300123d1-258a-485d-b889-73125ca992ca-kube-api-access-kspmf" (OuterVolumeSpecName: "kube-api-access-kspmf") pod "300123d1-258a-485d-b889-73125ca992ca" (UID: "300123d1-258a-485d-b889-73125ca992ca"). InnerVolumeSpecName "kube-api-access-kspmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.053987 4940 generic.go:334] "Generic (PLEG): container finished" podID="300123d1-258a-485d-b889-73125ca992ca" containerID="9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804" exitCode=0
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.054043 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerDied","Data":"9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804"}
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.054084 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-65zf2" event={"ID":"300123d1-258a-485d-b889-73125ca992ca","Type":"ContainerDied","Data":"c0e1f838e48c852b1a9fb42aab91f1453338eb2ed2d242324ada6e61f3493366"}
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.054107 4940 scope.go:117] "RemoveContainer" containerID="9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.054124 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-65zf2"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.069593 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kspmf\" (UniqueName: \"kubernetes.io/projected/300123d1-258a-485d-b889-73125ca992ca-kube-api-access-kspmf\") on node \"crc\" DevicePath \"\""
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.080377 4940 scope.go:117] "RemoveContainer" containerID="5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.106339 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-65zf2"]
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.117442 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-65zf2"]
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.120856 4940 scope.go:117] "RemoveContainer" containerID="767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.149334 4940 scope.go:117] "RemoveContainer" containerID="9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804"
Feb 23 09:47:13 crc kubenswrapper[4940]: E0223 09:47:13.149820 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804\": container with ID starting with 9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804 not found: ID does not exist" containerID="9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.149852 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804"} err="failed to get container status \"9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804\": rpc error: code = NotFound desc = could not find container \"9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804\": container with ID starting with 9c3f8fdc23b93a40d9981e0afc4d8542d8392293e55705a97c6953a14b4bd804 not found: ID does not exist"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.149874 4940 scope.go:117] "RemoveContainer" containerID="5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505"
Feb 23 09:47:13 crc kubenswrapper[4940]: E0223 09:47:13.150262 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505\": container with ID starting with 5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505 not found: ID does not exist" containerID="5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.150315 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505"} err="failed to get container status \"5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505\": rpc error: code = NotFound desc = could not find container \"5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505\": container with ID starting with 5b2cdd50a9fd3f0d8c2744741fc7a9b13297375a0f598db9bffcf5bf1768a505 not found: ID does not exist"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.150343 4940 scope.go:117] "RemoveContainer" containerID="767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886"
Feb 23 09:47:13 crc kubenswrapper[4940]: E0223 09:47:13.150621 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886\": container with ID starting with 767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886 not found: ID does not exist" containerID="767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.150646 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886"} err="failed to get container status \"767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886\": rpc error: code = NotFound desc = could not find container \"767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886\": container with ID starting with 767c5da5b27575bd35369ac3caa99dad0d55d8cc22eed8021a3cbca17734d886 not found: ID does not exist"
Feb 23 09:47:13 crc kubenswrapper[4940]: I0223 09:47:13.357476 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="300123d1-258a-485d-b889-73125ca992ca" path="/var/lib/kubelet/pods/300123d1-258a-485d-b889-73125ca992ca/volumes"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.736772 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5hh5n"]
Feb 23 09:47:30 crc kubenswrapper[4940]: E0223 09:47:30.737809 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="registry-server"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.737826 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="registry-server"
Feb 23 09:47:30 crc kubenswrapper[4940]: E0223 09:47:30.737869 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="extract-content"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.737876 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="extract-content"
Feb 23 09:47:30 crc kubenswrapper[4940]: E0223 09:47:30.737885 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="extract-utilities"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.737893 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="extract-utilities"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.738138 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="300123d1-258a-485d-b889-73125ca992ca" containerName="registry-server"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.739872 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.754443 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hh5n"]
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.782666 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-utilities\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.782835 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-catalog-content\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.782884 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlvq\" (UniqueName: \"kubernetes.io/projected/e413db25-8bd6-48b7-af17-7ebc5de85f94-kube-api-access-8qlvq\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.889895 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-utilities\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.890003 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-catalog-content\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.890043 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlvq\" (UniqueName: \"kubernetes.io/projected/e413db25-8bd6-48b7-af17-7ebc5de85f94-kube-api-access-8qlvq\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.890533 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-utilities\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.890651 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-catalog-content\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:30 crc kubenswrapper[4940]: I0223 09:47:30.929810 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qlvq\" (UniqueName: \"kubernetes.io/projected/e413db25-8bd6-48b7-af17-7ebc5de85f94-kube-api-access-8qlvq\") pod \"redhat-marketplace-5hh5n\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") " pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:31 crc kubenswrapper[4940]: I0223 09:47:31.074416 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:31 crc kubenswrapper[4940]: I0223 09:47:31.429712 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 09:47:31 crc kubenswrapper[4940]: I0223 09:47:31.430511 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 09:47:31 crc kubenswrapper[4940]: I0223 09:47:31.594184 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hh5n"]
Feb 23 09:47:32 crc kubenswrapper[4940]: I0223 09:47:32.231151 4940 generic.go:334] "Generic (PLEG): container finished" podID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerID="e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9" exitCode=0
Feb 23 09:47:32 crc kubenswrapper[4940]: I0223 09:47:32.231389 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerDied","Data":"e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9"}
Feb 23 09:47:32 crc kubenswrapper[4940]: I0223 09:47:32.231417 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerStarted","Data":"9258deccc62c7a2e9596b47a57fb0a697876fb818c680a0176fa9e421ab4796c"}
Feb 23 09:47:33 crc kubenswrapper[4940]: I0223 09:47:33.243510 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerStarted","Data":"47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb"}
Feb 23 09:47:34 crc kubenswrapper[4940]: I0223 09:47:34.255272 4940 generic.go:334] "Generic (PLEG): container finished" podID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerID="47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb" exitCode=0
Feb 23 09:47:34 crc kubenswrapper[4940]: I0223 09:47:34.255383 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerDied","Data":"47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb"}
Feb 23 09:47:35 crc kubenswrapper[4940]: I0223 09:47:35.267451 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerStarted","Data":"bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5"}
Feb 23 09:47:35 crc kubenswrapper[4940]: I0223 09:47:35.293791 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5hh5n" podStartSLOduration=2.644067832 podStartE2EDuration="5.293768398s" podCreationTimestamp="2026-02-23 09:47:30 +0000 UTC" firstStartedPulling="2026-02-23 09:47:32.233138045 +0000 UTC m=+3583.616344202" lastFinishedPulling="2026-02-23 09:47:34.882838611 +0000 UTC m=+3586.266044768" observedRunningTime="2026-02-23 09:47:35.286742397 +0000 UTC m=+3586.669948564" watchObservedRunningTime="2026-02-23 09:47:35.293768398 +0000 UTC m=+3586.676974565"
Feb 23 09:47:41 crc kubenswrapper[4940]: I0223 09:47:41.074779 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:41 crc kubenswrapper[4940]: I0223 09:47:41.076428 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:41 crc kubenswrapper[4940]: I0223 09:47:41.123140 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:41 crc kubenswrapper[4940]: I0223 09:47:41.368536 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:41 crc kubenswrapper[4940]: I0223 09:47:41.427083 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hh5n"]
Feb 23 09:47:43 crc kubenswrapper[4940]: I0223 09:47:43.331836 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5hh5n" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="registry-server" containerID="cri-o://bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5" gracePeriod=2
Feb 23 09:47:43 crc kubenswrapper[4940]: I0223 09:47:43.992911 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.088600 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-catalog-content\") pod \"e413db25-8bd6-48b7-af17-7ebc5de85f94\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") "
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.088792 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-utilities\") pod \"e413db25-8bd6-48b7-af17-7ebc5de85f94\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") "
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.088814 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qlvq\" (UniqueName: \"kubernetes.io/projected/e413db25-8bd6-48b7-af17-7ebc5de85f94-kube-api-access-8qlvq\") pod \"e413db25-8bd6-48b7-af17-7ebc5de85f94\" (UID: \"e413db25-8bd6-48b7-af17-7ebc5de85f94\") "
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.091066 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-utilities" (OuterVolumeSpecName: "utilities") pod "e413db25-8bd6-48b7-af17-7ebc5de85f94" (UID: "e413db25-8bd6-48b7-af17-7ebc5de85f94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.094752 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e413db25-8bd6-48b7-af17-7ebc5de85f94-kube-api-access-8qlvq" (OuterVolumeSpecName: "kube-api-access-8qlvq") pod "e413db25-8bd6-48b7-af17-7ebc5de85f94" (UID: "e413db25-8bd6-48b7-af17-7ebc5de85f94"). InnerVolumeSpecName "kube-api-access-8qlvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.120755 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e413db25-8bd6-48b7-af17-7ebc5de85f94" (UID: "e413db25-8bd6-48b7-af17-7ebc5de85f94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.190917 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.190958 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qlvq\" (UniqueName: \"kubernetes.io/projected/e413db25-8bd6-48b7-af17-7ebc5de85f94-kube-api-access-8qlvq\") on node \"crc\" DevicePath \"\""
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.190968 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e413db25-8bd6-48b7-af17-7ebc5de85f94-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.349454 4940 generic.go:334] "Generic (PLEG): container finished" podID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerID="bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5" exitCode=0
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.349502 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerDied","Data":"bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5"}
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.349542 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hh5n" event={"ID":"e413db25-8bd6-48b7-af17-7ebc5de85f94","Type":"ContainerDied","Data":"9258deccc62c7a2e9596b47a57fb0a697876fb818c680a0176fa9e421ab4796c"}
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.349565 4940 scope.go:117] "RemoveContainer" containerID="bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.350713 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hh5n"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.381287 4940 scope.go:117] "RemoveContainer" containerID="47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.400747 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hh5n"]
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.410972 4940 scope.go:117] "RemoveContainer" containerID="e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.413752 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hh5n"]
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.454775 4940 scope.go:117] "RemoveContainer" containerID="bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5"
Feb 23 09:47:44 crc kubenswrapper[4940]: E0223 09:47:44.455788 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5\": container with ID starting with bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5 not found: ID does not exist" containerID="bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.455860 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5"} err="failed to get container status \"bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5\": rpc error: code = NotFound desc = could not find container \"bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5\": container with ID starting with bbf5a59c8482f476a1f083e7d0067be09c6a813afd52998a2d5acb418925fff5 not found: ID does not exist"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.455918 4940 scope.go:117] "RemoveContainer" containerID="47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb"
Feb 23 09:47:44 crc kubenswrapper[4940]: E0223 09:47:44.456200 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb\": container with ID starting with 47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb not found: ID does not exist" containerID="47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.456254 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb"} err="failed to get container status \"47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb\": rpc error: code = NotFound desc = could not find container \"47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb\": container with ID starting with 47a7de87b5d5f896354a2cc7f13f648c96df068aef7cba87cda3386e21e914fb not found: ID does not exist"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.456275 4940 scope.go:117] "RemoveContainer" containerID="e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9"
Feb 23 09:47:44 crc kubenswrapper[4940]: E0223 09:47:44.456771 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9\": container with ID starting with e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9 not found: ID does not exist" containerID="e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9"
Feb 23 09:47:44 crc kubenswrapper[4940]: I0223 09:47:44.456829 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9"} err="failed to get container status \"e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9\": rpc error: code = NotFound desc = could not find container \"e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9\": container with ID starting with e7f7f9d6a0fe98cf7b5eb42f174931394189a6bec48d92c6898f8bfb09d166a9 not found: ID does not exist"
Feb 23 09:47:45 crc kubenswrapper[4940]: I0223 09:47:45.357640 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" path="/var/lib/kubelet/pods/e413db25-8bd6-48b7-af17-7ebc5de85f94/volumes"
Feb 23 09:48:01 crc kubenswrapper[4940]: I0223 09:48:01.429540 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 09:48:01 crc kubenswrapper[4940]: I0223 09:48:01.430067 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.429940 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.430514 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.430567 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs"
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.431524 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.431586 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" gracePeriod=600
Feb 23 09:48:31 crc kubenswrapper[4940]: E0223 09:48:31.563317 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.803864 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" exitCode=0
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.803922 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926"}
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.803965 4940 scope.go:117] "RemoveContainer" containerID="961765df6f652b6e91dfb4f45c27aeee9b8de0cc5e96f4a51f8f8ee4d6eb0900"
Feb 23 09:48:31 crc kubenswrapper[4940]: I0223 09:48:31.804829 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926"
Feb 23 09:48:31 crc kubenswrapper[4940]: E0223 09:48:31.805239 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1"
Feb 23 09:48:43 crc kubenswrapper[4940]: I0223 09:48:43.346103 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926"
Feb 23 09:48:43 crc kubenswrapper[4940]: E0223 09:48:43.346917 4940 pod_workers.go:1301]
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.015311 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8fsd6"] Feb 23 09:48:47 crc kubenswrapper[4940]: E0223 09:48:47.016381 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="extract-content" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.016398 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="extract-content" Feb 23 09:48:47 crc kubenswrapper[4940]: E0223 09:48:47.016424 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="extract-utilities" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.016432 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="extract-utilities" Feb 23 09:48:47 crc kubenswrapper[4940]: E0223 09:48:47.016475 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="registry-server" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.016483 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="registry-server" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.016707 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e413db25-8bd6-48b7-af17-7ebc5de85f94" containerName="registry-server" Feb 23 09:48:47 crc 
kubenswrapper[4940]: I0223 09:48:47.018391 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.032127 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8fsd6"] Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.163481 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-catalog-content\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.163872 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dp9\" (UniqueName: \"kubernetes.io/projected/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-kube-api-access-v9dp9\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.164008 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-utilities\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.265450 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dp9\" (UniqueName: \"kubernetes.io/projected/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-kube-api-access-v9dp9\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " 
pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.265582 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-utilities\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.265651 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-catalog-content\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.266160 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-utilities\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.266194 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-catalog-content\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.286023 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dp9\" (UniqueName: \"kubernetes.io/projected/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-kube-api-access-v9dp9\") pod \"community-operators-8fsd6\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " 
pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.350512 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:47 crc kubenswrapper[4940]: I0223 09:48:47.932432 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8fsd6"] Feb 23 09:48:48 crc kubenswrapper[4940]: I0223 09:48:48.951167 4940 generic.go:334] "Generic (PLEG): container finished" podID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerID="81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419" exitCode=0 Feb 23 09:48:48 crc kubenswrapper[4940]: I0223 09:48:48.951284 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerDied","Data":"81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419"} Feb 23 09:48:48 crc kubenswrapper[4940]: I0223 09:48:48.951529 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerStarted","Data":"86d122a4a558d78b585a70cc35039f5e9c823651d667b632f2db9b18588dfaf5"} Feb 23 09:48:50 crc kubenswrapper[4940]: I0223 09:48:50.987422 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerStarted","Data":"7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497"} Feb 23 09:48:52 crc kubenswrapper[4940]: I0223 09:48:52.000276 4940 generic.go:334] "Generic (PLEG): container finished" podID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerID="7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497" exitCode=0 Feb 23 09:48:52 crc kubenswrapper[4940]: I0223 09:48:52.000372 4940 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerDied","Data":"7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497"} Feb 23 09:48:52 crc kubenswrapper[4940]: I0223 09:48:52.003007 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:48:53 crc kubenswrapper[4940]: I0223 09:48:53.015152 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerStarted","Data":"f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844"} Feb 23 09:48:53 crc kubenswrapper[4940]: I0223 09:48:53.036709 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8fsd6" podStartSLOduration=3.614757114 podStartE2EDuration="7.036688434s" podCreationTimestamp="2026-02-23 09:48:46 +0000 UTC" firstStartedPulling="2026-02-23 09:48:48.953381099 +0000 UTC m=+3660.336587256" lastFinishedPulling="2026-02-23 09:48:52.375312429 +0000 UTC m=+3663.758518576" observedRunningTime="2026-02-23 09:48:53.035095604 +0000 UTC m=+3664.418301761" watchObservedRunningTime="2026-02-23 09:48:53.036688434 +0000 UTC m=+3664.419894601" Feb 23 09:48:54 crc kubenswrapper[4940]: I0223 09:48:54.348070 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:48:54 crc kubenswrapper[4940]: E0223 09:48:54.348882 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:48:57 crc kubenswrapper[4940]: I0223 09:48:57.357477 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:57 crc kubenswrapper[4940]: I0223 09:48:57.357835 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:48:58 crc kubenswrapper[4940]: I0223 09:48:58.397138 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8fsd6" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="registry-server" probeResult="failure" output=< Feb 23 09:48:58 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:48:58 crc kubenswrapper[4940]: > Feb 23 09:49:05 crc kubenswrapper[4940]: I0223 09:49:05.345827 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:49:05 crc kubenswrapper[4940]: E0223 09:49:05.346651 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:49:07 crc kubenswrapper[4940]: I0223 09:49:07.413709 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:49:07 crc kubenswrapper[4940]: I0223 09:49:07.474904 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:49:07 crc kubenswrapper[4940]: I0223 09:49:07.671456 4940 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-8fsd6"] Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.162054 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8fsd6" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="registry-server" containerID="cri-o://f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844" gracePeriod=2 Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.763777 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.786081 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-utilities\") pod \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.786228 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9dp9\" (UniqueName: \"kubernetes.io/projected/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-kube-api-access-v9dp9\") pod \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.786294 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-catalog-content\") pod \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\" (UID: \"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3\") " Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.787076 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-utilities" (OuterVolumeSpecName: "utilities") pod 
"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" (UID: "5a0db116-a448-439f-a9c6-d7fdd3a1b7a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.796894 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-kube-api-access-v9dp9" (OuterVolumeSpecName: "kube-api-access-v9dp9") pod "5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" (UID: "5a0db116-a448-439f-a9c6-d7fdd3a1b7a3"). InnerVolumeSpecName "kube-api-access-v9dp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.843341 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" (UID: "5a0db116-a448-439f-a9c6-d7fdd3a1b7a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.889360 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.889681 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9dp9\" (UniqueName: \"kubernetes.io/projected/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-kube-api-access-v9dp9\") on node \"crc\" DevicePath \"\"" Feb 23 09:49:09 crc kubenswrapper[4940]: I0223 09:49:09.889700 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.177709 4940 generic.go:334] "Generic (PLEG): container finished" podID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerID="f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844" exitCode=0 Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.177757 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerDied","Data":"f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844"} Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.177787 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8fsd6" event={"ID":"5a0db116-a448-439f-a9c6-d7fdd3a1b7a3","Type":"ContainerDied","Data":"86d122a4a558d78b585a70cc35039f5e9c823651d667b632f2db9b18588dfaf5"} Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.177810 4940 scope.go:117] "RemoveContainer" containerID="f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 
09:49:10.177964 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8fsd6" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.204404 4940 scope.go:117] "RemoveContainer" containerID="7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.225774 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8fsd6"] Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.237149 4940 scope.go:117] "RemoveContainer" containerID="81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.237380 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8fsd6"] Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.285792 4940 scope.go:117] "RemoveContainer" containerID="f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844" Feb 23 09:49:10 crc kubenswrapper[4940]: E0223 09:49:10.286224 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844\": container with ID starting with f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844 not found: ID does not exist" containerID="f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.286272 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844"} err="failed to get container status \"f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844\": rpc error: code = NotFound desc = could not find container \"f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844\": container with ID starting with 
f80f745990feee2903e4f5ff1a6647b45d8740d5b2766fb09994a04a180a0844 not found: ID does not exist" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.286301 4940 scope.go:117] "RemoveContainer" containerID="7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497" Feb 23 09:49:10 crc kubenswrapper[4940]: E0223 09:49:10.286769 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497\": container with ID starting with 7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497 not found: ID does not exist" containerID="7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.286814 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497"} err="failed to get container status \"7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497\": rpc error: code = NotFound desc = could not find container \"7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497\": container with ID starting with 7b00b72473e6af814879b9d47832847de32e269487ff70c378b4c10e7e2dc497 not found: ID does not exist" Feb 23 09:49:10 crc kubenswrapper[4940]: I0223 09:49:10.286843 4940 scope.go:117] "RemoveContainer" containerID="81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419" Feb 23 09:49:10 crc kubenswrapper[4940]: E0223 09:49:10.287121 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419\": container with ID starting with 81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419 not found: ID does not exist" containerID="81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419" Feb 23 09:49:10 crc 
kubenswrapper[4940]: I0223 09:49:10.287150 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419"} err="failed to get container status \"81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419\": rpc error: code = NotFound desc = could not find container \"81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419\": container with ID starting with 81958451233c045cd939fe618d05f27edb1781392d24b1b56d4e8430a33b5419 not found: ID does not exist" Feb 23 09:49:11 crc kubenswrapper[4940]: I0223 09:49:11.355206 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" path="/var/lib/kubelet/pods/5a0db116-a448-439f-a9c6-d7fdd3a1b7a3/volumes" Feb 23 09:49:18 crc kubenswrapper[4940]: I0223 09:49:18.350293 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:49:18 crc kubenswrapper[4940]: E0223 09:49:18.351805 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:49:32 crc kubenswrapper[4940]: I0223 09:49:32.345571 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:49:32 crc kubenswrapper[4940]: E0223 09:49:32.346291 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:49:45 crc kubenswrapper[4940]: I0223 09:49:45.346197 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:49:45 crc kubenswrapper[4940]: E0223 09:49:45.347061 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:49:58 crc kubenswrapper[4940]: I0223 09:49:58.346043 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:49:58 crc kubenswrapper[4940]: E0223 09:49:58.347040 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:50:12 crc kubenswrapper[4940]: I0223 09:50:12.345556 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:50:12 crc kubenswrapper[4940]: E0223 09:50:12.346451 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:50:25 crc kubenswrapper[4940]: I0223 09:50:25.346036 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:50:25 crc kubenswrapper[4940]: E0223 09:50:25.347046 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:50:38 crc kubenswrapper[4940]: I0223 09:50:38.346228 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:50:38 crc kubenswrapper[4940]: E0223 09:50:38.347275 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:50:50 crc kubenswrapper[4940]: I0223 09:50:50.345906 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:50:50 crc kubenswrapper[4940]: E0223 09:50:50.346589 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:51:02 crc kubenswrapper[4940]: I0223 09:51:02.345589 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:51:02 crc kubenswrapper[4940]: E0223 09:51:02.346456 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:51:14 crc kubenswrapper[4940]: I0223 09:51:14.346201 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:51:14 crc kubenswrapper[4940]: E0223 09:51:14.347220 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:51:29 crc kubenswrapper[4940]: I0223 09:51:29.352403 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:51:29 crc kubenswrapper[4940]: E0223 09:51:29.353470 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:51:44 crc kubenswrapper[4940]: I0223 09:51:44.345900 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:51:44 crc kubenswrapper[4940]: E0223 09:51:44.347380 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:51:58 crc kubenswrapper[4940]: I0223 09:51:58.345991 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:51:58 crc kubenswrapper[4940]: E0223 09:51:58.347156 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:52:10 crc kubenswrapper[4940]: I0223 09:52:10.345899 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:52:10 crc kubenswrapper[4940]: E0223 09:52:10.346592 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:52:25 crc kubenswrapper[4940]: I0223 09:52:25.346141 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:52:25 crc kubenswrapper[4940]: E0223 09:52:25.346881 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:52:39 crc kubenswrapper[4940]: I0223 09:52:39.354050 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:52:39 crc kubenswrapper[4940]: E0223 09:52:39.354813 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:52:54 crc kubenswrapper[4940]: I0223 09:52:54.345862 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:52:54 crc kubenswrapper[4940]: E0223 09:52:54.346779 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:53:06 crc kubenswrapper[4940]: I0223 09:53:06.345072 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:53:06 crc kubenswrapper[4940]: E0223 09:53:06.346052 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:53:18 crc kubenswrapper[4940]: I0223 09:53:18.346131 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:53:18 crc kubenswrapper[4940]: E0223 09:53:18.346989 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:53:30 crc kubenswrapper[4940]: I0223 09:53:30.346826 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:53:30 crc kubenswrapper[4940]: E0223 09:53:30.347628 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 09:53:44 crc kubenswrapper[4940]: I0223 09:53:44.346306 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:53:45 crc kubenswrapper[4940]: I0223 09:53:45.130588 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"d5c3f3b7aec9a80c96d9d04faed873fc7c641456f2cf0c11aa7e7508b7d5887d"} Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.008573 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9svhv"] Feb 23 09:55:32 crc kubenswrapper[4940]: E0223 09:55:32.009527 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="extract-content" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.009542 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="extract-content" Feb 23 09:55:32 crc kubenswrapper[4940]: E0223 09:55:32.009566 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="extract-utilities" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.009573 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="extract-utilities" Feb 23 09:55:32 crc kubenswrapper[4940]: E0223 09:55:32.009589 4940 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="registry-server" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.009595 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="registry-server" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.009851 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0db116-a448-439f-a9c6-d7fdd3a1b7a3" containerName="registry-server" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.011553 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.023867 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9svhv"] Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.088994 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-catalog-content\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.089101 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-utilities\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.089131 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvtph\" (UniqueName: \"kubernetes.io/projected/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-kube-api-access-cvtph\") pod \"redhat-operators-9svhv\" (UID: 
\"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.191292 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-catalog-content\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.191361 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-utilities\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.191396 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvtph\" (UniqueName: \"kubernetes.io/projected/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-kube-api-access-cvtph\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.192252 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-catalog-content\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.192342 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-utilities\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " 
pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.215533 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvtph\" (UniqueName: \"kubernetes.io/projected/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-kube-api-access-cvtph\") pod \"redhat-operators-9svhv\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.345183 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:32 crc kubenswrapper[4940]: I0223 09:55:32.924492 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9svhv"] Feb 23 09:55:34 crc kubenswrapper[4940]: I0223 09:55:34.112043 4940 generic.go:334] "Generic (PLEG): container finished" podID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerID="0a894172deae8022b7b583384c89dd13243864ef092eb9e318b02ce3e68b647c" exitCode=0 Feb 23 09:55:34 crc kubenswrapper[4940]: I0223 09:55:34.112121 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerDied","Data":"0a894172deae8022b7b583384c89dd13243864ef092eb9e318b02ce3e68b647c"} Feb 23 09:55:34 crc kubenswrapper[4940]: I0223 09:55:34.112582 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerStarted","Data":"6b470755849d3e0ee002401a6384f2b2c4f76ff7330cdd72a78980e200aaff56"} Feb 23 09:55:34 crc kubenswrapper[4940]: I0223 09:55:34.115897 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 09:55:36 crc kubenswrapper[4940]: I0223 09:55:36.133070 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerStarted","Data":"5c6793ef895c9b48ed7b389aff577f47e19272034084dec975a2141c4a23281a"} Feb 23 09:55:40 crc kubenswrapper[4940]: I0223 09:55:40.173524 4940 generic.go:334] "Generic (PLEG): container finished" podID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerID="5c6793ef895c9b48ed7b389aff577f47e19272034084dec975a2141c4a23281a" exitCode=0 Feb 23 09:55:40 crc kubenswrapper[4940]: I0223 09:55:40.173649 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerDied","Data":"5c6793ef895c9b48ed7b389aff577f47e19272034084dec975a2141c4a23281a"} Feb 23 09:55:41 crc kubenswrapper[4940]: I0223 09:55:41.185333 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerStarted","Data":"82011d59213c841ce7cb8500fad2f955ab05039f13b1a18963c3848ce152a270"} Feb 23 09:55:41 crc kubenswrapper[4940]: I0223 09:55:41.202836 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9svhv" podStartSLOduration=3.759921449 podStartE2EDuration="10.202820967s" podCreationTimestamp="2026-02-23 09:55:31 +0000 UTC" firstStartedPulling="2026-02-23 09:55:34.115486 +0000 UTC m=+4065.498692157" lastFinishedPulling="2026-02-23 09:55:40.558385508 +0000 UTC m=+4071.941591675" observedRunningTime="2026-02-23 09:55:41.201639131 +0000 UTC m=+4072.584845288" watchObservedRunningTime="2026-02-23 09:55:41.202820967 +0000 UTC m=+4072.586027124" Feb 23 09:55:42 crc kubenswrapper[4940]: I0223 09:55:42.510359 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:42 crc kubenswrapper[4940]: I0223 09:55:42.514156 4940 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:55:43 crc kubenswrapper[4940]: I0223 09:55:43.636508 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9svhv" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="registry-server" probeResult="failure" output=< Feb 23 09:55:43 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:55:43 crc kubenswrapper[4940]: > Feb 23 09:55:53 crc kubenswrapper[4940]: I0223 09:55:53.393566 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9svhv" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="registry-server" probeResult="failure" output=< Feb 23 09:55:53 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:55:53 crc kubenswrapper[4940]: > Feb 23 09:56:01 crc kubenswrapper[4940]: I0223 09:56:01.430010 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:56:01 crc kubenswrapper[4940]: I0223 09:56:01.430683 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:56:03 crc kubenswrapper[4940]: I0223 09:56:03.393760 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9svhv" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="registry-server" probeResult="failure" output=< Feb 23 09:56:03 crc 
kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 09:56:03 crc kubenswrapper[4940]: > Feb 23 09:56:12 crc kubenswrapper[4940]: I0223 09:56:12.404838 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:56:12 crc kubenswrapper[4940]: I0223 09:56:12.502097 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:56:12 crc kubenswrapper[4940]: I0223 09:56:12.658168 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9svhv"] Feb 23 09:56:13 crc kubenswrapper[4940]: I0223 09:56:13.911082 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9svhv" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="registry-server" containerID="cri-o://82011d59213c841ce7cb8500fad2f955ab05039f13b1a18963c3848ce152a270" gracePeriod=2 Feb 23 09:56:14 crc kubenswrapper[4940]: I0223 09:56:14.922199 4940 generic.go:334] "Generic (PLEG): container finished" podID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerID="82011d59213c841ce7cb8500fad2f955ab05039f13b1a18963c3848ce152a270" exitCode=0 Feb 23 09:56:14 crc kubenswrapper[4940]: I0223 09:56:14.922271 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerDied","Data":"82011d59213c841ce7cb8500fad2f955ab05039f13b1a18963c3848ce152a270"} Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.266779 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.304395 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-catalog-content\") pod \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.304626 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvtph\" (UniqueName: \"kubernetes.io/projected/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-kube-api-access-cvtph\") pod \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.304775 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-utilities\") pod \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\" (UID: \"a73a0235-7b1e-4224-b3ad-7be7fcedc87f\") " Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.307226 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-utilities" (OuterVolumeSpecName: "utilities") pod "a73a0235-7b1e-4224-b3ad-7be7fcedc87f" (UID: "a73a0235-7b1e-4224-b3ad-7be7fcedc87f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.337005 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-kube-api-access-cvtph" (OuterVolumeSpecName: "kube-api-access-cvtph") pod "a73a0235-7b1e-4224-b3ad-7be7fcedc87f" (UID: "a73a0235-7b1e-4224-b3ad-7be7fcedc87f"). InnerVolumeSpecName "kube-api-access-cvtph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.407809 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvtph\" (UniqueName: \"kubernetes.io/projected/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-kube-api-access-cvtph\") on node \"crc\" DevicePath \"\"" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.407853 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.451306 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a73a0235-7b1e-4224-b3ad-7be7fcedc87f" (UID: "a73a0235-7b1e-4224-b3ad-7be7fcedc87f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.510109 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a73a0235-7b1e-4224-b3ad-7be7fcedc87f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.935839 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9svhv" event={"ID":"a73a0235-7b1e-4224-b3ad-7be7fcedc87f","Type":"ContainerDied","Data":"6b470755849d3e0ee002401a6384f2b2c4f76ff7330cdd72a78980e200aaff56"} Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.935903 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9svhv" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.935907 4940 scope.go:117] "RemoveContainer" containerID="82011d59213c841ce7cb8500fad2f955ab05039f13b1a18963c3848ce152a270" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.959661 4940 scope.go:117] "RemoveContainer" containerID="5c6793ef895c9b48ed7b389aff577f47e19272034084dec975a2141c4a23281a" Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.986554 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9svhv"] Feb 23 09:56:15 crc kubenswrapper[4940]: I0223 09:56:15.990543 4940 scope.go:117] "RemoveContainer" containerID="0a894172deae8022b7b583384c89dd13243864ef092eb9e318b02ce3e68b647c" Feb 23 09:56:16 crc kubenswrapper[4940]: I0223 09:56:16.015135 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9svhv"] Feb 23 09:56:17 crc kubenswrapper[4940]: I0223 09:56:17.359002 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" path="/var/lib/kubelet/pods/a73a0235-7b1e-4224-b3ad-7be7fcedc87f/volumes" Feb 23 09:56:31 crc kubenswrapper[4940]: I0223 09:56:31.429007 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:56:31 crc kubenswrapper[4940]: I0223 09:56:31.429626 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:57:01 crc kubenswrapper[4940]: I0223 
09:57:01.429666 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:57:01 crc kubenswrapper[4940]: I0223 09:57:01.430386 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:57:01 crc kubenswrapper[4940]: I0223 09:57:01.430433 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 09:57:01 crc kubenswrapper[4940]: I0223 09:57:01.431293 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5c3f3b7aec9a80c96d9d04faed873fc7c641456f2cf0c11aa7e7508b7d5887d"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 09:57:01 crc kubenswrapper[4940]: I0223 09:57:01.431349 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://d5c3f3b7aec9a80c96d9d04faed873fc7c641456f2cf0c11aa7e7508b7d5887d" gracePeriod=600 Feb 23 09:57:02 crc kubenswrapper[4940]: I0223 09:57:02.313282 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="d5c3f3b7aec9a80c96d9d04faed873fc7c641456f2cf0c11aa7e7508b7d5887d" exitCode=0 Feb 23 
09:57:02 crc kubenswrapper[4940]: I0223 09:57:02.313371 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"d5c3f3b7aec9a80c96d9d04faed873fc7c641456f2cf0c11aa7e7508b7d5887d"} Feb 23 09:57:02 crc kubenswrapper[4940]: I0223 09:57:02.313734 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da"} Feb 23 09:57:02 crc kubenswrapper[4940]: I0223 09:57:02.313756 4940 scope.go:117] "RemoveContainer" containerID="141421c82e159881b47175069362afdd21be912da7cbb73536bf9e0ca5da9926" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.090417 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjlgz"] Feb 23 09:57:05 crc kubenswrapper[4940]: E0223 09:57:05.091330 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="extract-utilities" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.091349 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="extract-utilities" Feb 23 09:57:05 crc kubenswrapper[4940]: E0223 09:57:05.091377 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="extract-content" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.091384 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="extract-content" Feb 23 09:57:05 crc kubenswrapper[4940]: E0223 09:57:05.091394 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" 
containerName="registry-server" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.091400 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="registry-server" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.091586 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73a0235-7b1e-4224-b3ad-7be7fcedc87f" containerName="registry-server" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.092959 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.105521 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjlgz"] Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.259009 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-utilities\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.259365 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-catalog-content\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.259430 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dq2v\" (UniqueName: \"kubernetes.io/projected/c514e1b7-80bd-4df3-8a76-ec821cc48e84-kube-api-access-6dq2v\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " 
pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.361770 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dq2v\" (UniqueName: \"kubernetes.io/projected/c514e1b7-80bd-4df3-8a76-ec821cc48e84-kube-api-access-6dq2v\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.361912 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-utilities\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.362061 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-catalog-content\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.364004 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-utilities\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.363831 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-catalog-content\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " 
pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.385980 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dq2v\" (UniqueName: \"kubernetes.io/projected/c514e1b7-80bd-4df3-8a76-ec821cc48e84-kube-api-access-6dq2v\") pod \"certified-operators-cjlgz\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.416237 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:05 crc kubenswrapper[4940]: I0223 09:57:05.991024 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjlgz"] Feb 23 09:57:05 crc kubenswrapper[4940]: W0223 09:57:05.992367 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc514e1b7_80bd_4df3_8a76_ec821cc48e84.slice/crio-f45aaa2bdac9c4af1f7bf64dfa016805e825a6903462269ca917135f9c406a6f WatchSource:0}: Error finding container f45aaa2bdac9c4af1f7bf64dfa016805e825a6903462269ca917135f9c406a6f: Status 404 returned error can't find the container with id f45aaa2bdac9c4af1f7bf64dfa016805e825a6903462269ca917135f9c406a6f Feb 23 09:57:06 crc kubenswrapper[4940]: I0223 09:57:06.361001 4940 generic.go:334] "Generic (PLEG): container finished" podID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerID="21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3" exitCode=0 Feb 23 09:57:06 crc kubenswrapper[4940]: I0223 09:57:06.361314 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerDied","Data":"21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3"} Feb 23 09:57:06 crc kubenswrapper[4940]: I0223 09:57:06.361347 
4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerStarted","Data":"f45aaa2bdac9c4af1f7bf64dfa016805e825a6903462269ca917135f9c406a6f"} Feb 23 09:57:07 crc kubenswrapper[4940]: I0223 09:57:07.370910 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerStarted","Data":"4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f"} Feb 23 09:57:09 crc kubenswrapper[4940]: I0223 09:57:09.398785 4940 generic.go:334] "Generic (PLEG): container finished" podID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerID="4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f" exitCode=0 Feb 23 09:57:09 crc kubenswrapper[4940]: I0223 09:57:09.399039 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerDied","Data":"4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f"} Feb 23 09:57:10 crc kubenswrapper[4940]: I0223 09:57:10.421879 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerStarted","Data":"3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee"} Feb 23 09:57:10 crc kubenswrapper[4940]: I0223 09:57:10.459801 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjlgz" podStartSLOduration=2.052852832 podStartE2EDuration="5.459775491s" podCreationTimestamp="2026-02-23 09:57:05 +0000 UTC" firstStartedPulling="2026-02-23 09:57:06.363323566 +0000 UTC m=+4157.746529723" lastFinishedPulling="2026-02-23 09:57:09.770246215 +0000 UTC m=+4161.153452382" observedRunningTime="2026-02-23 
09:57:10.451180111 +0000 UTC m=+4161.834386288" watchObservedRunningTime="2026-02-23 09:57:10.459775491 +0000 UTC m=+4161.842981648" Feb 23 09:57:15 crc kubenswrapper[4940]: I0223 09:57:15.417217 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:15 crc kubenswrapper[4940]: I0223 09:57:15.417924 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:15 crc kubenswrapper[4940]: I0223 09:57:15.482583 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:15 crc kubenswrapper[4940]: I0223 09:57:15.546629 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:15 crc kubenswrapper[4940]: I0223 09:57:15.725944 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjlgz"] Feb 23 09:57:17 crc kubenswrapper[4940]: I0223 09:57:17.485750 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjlgz" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="registry-server" containerID="cri-o://3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee" gracePeriod=2 Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.341289 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.367532 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dq2v\" (UniqueName: \"kubernetes.io/projected/c514e1b7-80bd-4df3-8a76-ec821cc48e84-kube-api-access-6dq2v\") pod \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.367770 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-catalog-content\") pod \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.367847 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-utilities\") pod \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\" (UID: \"c514e1b7-80bd-4df3-8a76-ec821cc48e84\") " Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.368865 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-utilities" (OuterVolumeSpecName: "utilities") pod "c514e1b7-80bd-4df3-8a76-ec821cc48e84" (UID: "c514e1b7-80bd-4df3-8a76-ec821cc48e84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.373473 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c514e1b7-80bd-4df3-8a76-ec821cc48e84-kube-api-access-6dq2v" (OuterVolumeSpecName: "kube-api-access-6dq2v") pod "c514e1b7-80bd-4df3-8a76-ec821cc48e84" (UID: "c514e1b7-80bd-4df3-8a76-ec821cc48e84"). InnerVolumeSpecName "kube-api-access-6dq2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.419760 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c514e1b7-80bd-4df3-8a76-ec821cc48e84" (UID: "c514e1b7-80bd-4df3-8a76-ec821cc48e84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.471672 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.471696 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c514e1b7-80bd-4df3-8a76-ec821cc48e84-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.471707 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dq2v\" (UniqueName: \"kubernetes.io/projected/c514e1b7-80bd-4df3-8a76-ec821cc48e84-kube-api-access-6dq2v\") on node \"crc\" DevicePath \"\"" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.496585 4940 generic.go:334] "Generic (PLEG): container finished" podID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerID="3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee" exitCode=0 Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.496661 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerDied","Data":"3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee"} Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.496703 4940 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-cjlgz" event={"ID":"c514e1b7-80bd-4df3-8a76-ec821cc48e84","Type":"ContainerDied","Data":"f45aaa2bdac9c4af1f7bf64dfa016805e825a6903462269ca917135f9c406a6f"} Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.496724 4940 scope.go:117] "RemoveContainer" containerID="3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.496860 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjlgz" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.523099 4940 scope.go:117] "RemoveContainer" containerID="4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f" Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.560643 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjlgz"] Feb 23 09:57:18 crc kubenswrapper[4940]: I0223 09:57:18.570226 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjlgz"] Feb 23 09:57:18 crc kubenswrapper[4940]: E0223 09:57:18.750644 4940 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc514e1b7_80bd_4df3_8a76_ec821cc48e84.slice\": RecentStats: unable to find data in memory cache]" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.245520 4940 scope.go:117] "RemoveContainer" containerID="21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.271132 4940 scope.go:117] "RemoveContainer" containerID="3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee" Feb 23 09:57:19 crc kubenswrapper[4940]: E0223 09:57:19.271727 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee\": container with ID starting with 3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee not found: ID does not exist" containerID="3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.271862 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee"} err="failed to get container status \"3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee\": rpc error: code = NotFound desc = could not find container \"3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee\": container with ID starting with 3c79f3a3586daf55b64288ae30ad87ac71decd0144eed4a981d177ffec323fee not found: ID does not exist" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.271953 4940 scope.go:117] "RemoveContainer" containerID="4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f" Feb 23 09:57:19 crc kubenswrapper[4940]: E0223 09:57:19.272501 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f\": container with ID starting with 4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f not found: ID does not exist" containerID="4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.272547 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f"} err="failed to get container status \"4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f\": rpc error: code = NotFound desc = could not find container \"4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f\": container with ID 
starting with 4a315947fec1ab2e0390a51cc6e52b9eb4bb39e74f91597a777e8787350cf82f not found: ID does not exist" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.272576 4940 scope.go:117] "RemoveContainer" containerID="21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3" Feb 23 09:57:19 crc kubenswrapper[4940]: E0223 09:57:19.272917 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3\": container with ID starting with 21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3 not found: ID does not exist" containerID="21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.272950 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3"} err="failed to get container status \"21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3\": rpc error: code = NotFound desc = could not find container \"21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3\": container with ID starting with 21a534a6ecd079f8a28bad7f834bc86c4e75c1d8ac094bd95e6922d23cfd0fb3 not found: ID does not exist" Feb 23 09:57:19 crc kubenswrapper[4940]: I0223 09:57:19.357975 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" path="/var/lib/kubelet/pods/c514e1b7-80bd-4df3-8a76-ec821cc48e84/volumes" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.085592 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ghqkx"] Feb 23 09:57:41 crc kubenswrapper[4940]: E0223 09:57:41.088305 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="registry-server" Feb 23 09:57:41 crc 
kubenswrapper[4940]: I0223 09:57:41.088319 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="registry-server" Feb 23 09:57:41 crc kubenswrapper[4940]: E0223 09:57:41.088348 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="extract-utilities" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.088355 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="extract-utilities" Feb 23 09:57:41 crc kubenswrapper[4940]: E0223 09:57:41.088370 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="extract-content" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.088377 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="extract-content" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.088629 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="c514e1b7-80bd-4df3-8a76-ec821cc48e84" containerName="registry-server" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.092893 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.127784 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghqkx"] Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.218476 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6d6m\" (UniqueName: \"kubernetes.io/projected/e54dfcd0-73b2-4250-a855-047f92840874-kube-api-access-s6d6m\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.218572 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-catalog-content\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.218591 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-utilities\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.320223 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-catalog-content\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.320272 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-utilities\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.320458 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6d6m\" (UniqueName: \"kubernetes.io/projected/e54dfcd0-73b2-4250-a855-047f92840874-kube-api-access-s6d6m\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.321260 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-catalog-content\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.321292 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-utilities\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.349631 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6d6m\" (UniqueName: \"kubernetes.io/projected/e54dfcd0-73b2-4250-a855-047f92840874-kube-api-access-s6d6m\") pod \"redhat-marketplace-ghqkx\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.441252 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:41 crc kubenswrapper[4940]: I0223 09:57:41.921724 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghqkx"] Feb 23 09:57:42 crc kubenswrapper[4940]: I0223 09:57:42.728760 4940 generic.go:334] "Generic (PLEG): container finished" podID="e54dfcd0-73b2-4250-a855-047f92840874" containerID="97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612" exitCode=0 Feb 23 09:57:42 crc kubenswrapper[4940]: I0223 09:57:42.728822 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerDied","Data":"97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612"} Feb 23 09:57:42 crc kubenswrapper[4940]: I0223 09:57:42.730260 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerStarted","Data":"50bf44185003120c99b0d542be91d1511bc985b514be92bd492dc95a02ad7df5"} Feb 23 09:57:43 crc kubenswrapper[4940]: I0223 09:57:43.740819 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerStarted","Data":"240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0"} Feb 23 09:57:44 crc kubenswrapper[4940]: I0223 09:57:44.752945 4940 generic.go:334] "Generic (PLEG): container finished" podID="e54dfcd0-73b2-4250-a855-047f92840874" containerID="240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0" exitCode=0 Feb 23 09:57:44 crc kubenswrapper[4940]: I0223 09:57:44.753073 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" 
event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerDied","Data":"240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0"} Feb 23 09:57:45 crc kubenswrapper[4940]: I0223 09:57:45.763685 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerStarted","Data":"b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796"} Feb 23 09:57:45 crc kubenswrapper[4940]: I0223 09:57:45.782289 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ghqkx" podStartSLOduration=2.3323026159999998 podStartE2EDuration="4.78226755s" podCreationTimestamp="2026-02-23 09:57:41 +0000 UTC" firstStartedPulling="2026-02-23 09:57:42.730982611 +0000 UTC m=+4194.114188768" lastFinishedPulling="2026-02-23 09:57:45.180947545 +0000 UTC m=+4196.564153702" observedRunningTime="2026-02-23 09:57:45.777737297 +0000 UTC m=+4197.160943454" watchObservedRunningTime="2026-02-23 09:57:45.78226755 +0000 UTC m=+4197.165473707" Feb 23 09:57:51 crc kubenswrapper[4940]: I0223 09:57:51.442020 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:51 crc kubenswrapper[4940]: I0223 09:57:51.442476 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:51 crc kubenswrapper[4940]: I0223 09:57:51.539509 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:51 crc kubenswrapper[4940]: I0223 09:57:51.878635 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:51 crc kubenswrapper[4940]: I0223 09:57:51.926837 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-ghqkx"] Feb 23 09:57:53 crc kubenswrapper[4940]: I0223 09:57:53.832347 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ghqkx" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="registry-server" containerID="cri-o://b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796" gracePeriod=2 Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.475642 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.596225 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-utilities\") pod \"e54dfcd0-73b2-4250-a855-047f92840874\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.596478 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-catalog-content\") pod \"e54dfcd0-73b2-4250-a855-047f92840874\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.596561 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6d6m\" (UniqueName: \"kubernetes.io/projected/e54dfcd0-73b2-4250-a855-047f92840874-kube-api-access-s6d6m\") pod \"e54dfcd0-73b2-4250-a855-047f92840874\" (UID: \"e54dfcd0-73b2-4250-a855-047f92840874\") " Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.597591 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-utilities" (OuterVolumeSpecName: "utilities") pod "e54dfcd0-73b2-4250-a855-047f92840874" (UID: 
"e54dfcd0-73b2-4250-a855-047f92840874"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.613904 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54dfcd0-73b2-4250-a855-047f92840874-kube-api-access-s6d6m" (OuterVolumeSpecName: "kube-api-access-s6d6m") pod "e54dfcd0-73b2-4250-a855-047f92840874" (UID: "e54dfcd0-73b2-4250-a855-047f92840874"). InnerVolumeSpecName "kube-api-access-s6d6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.625813 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e54dfcd0-73b2-4250-a855-047f92840874" (UID: "e54dfcd0-73b2-4250-a855-047f92840874"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.698768 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.698798 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e54dfcd0-73b2-4250-a855-047f92840874-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.698810 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6d6m\" (UniqueName: \"kubernetes.io/projected/e54dfcd0-73b2-4250-a855-047f92840874-kube-api-access-s6d6m\") on node \"crc\" DevicePath \"\"" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.843409 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="e54dfcd0-73b2-4250-a855-047f92840874" containerID="b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796" exitCode=0 Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.843459 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerDied","Data":"b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796"} Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.843486 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghqkx" event={"ID":"e54dfcd0-73b2-4250-a855-047f92840874","Type":"ContainerDied","Data":"50bf44185003120c99b0d542be91d1511bc985b514be92bd492dc95a02ad7df5"} Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.843505 4940 scope.go:117] "RemoveContainer" containerID="b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.843678 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghqkx" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.888461 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghqkx"] Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.894708 4940 scope.go:117] "RemoveContainer" containerID="240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.898674 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghqkx"] Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.932144 4940 scope.go:117] "RemoveContainer" containerID="97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.970475 4940 scope.go:117] "RemoveContainer" containerID="b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796" Feb 23 09:57:54 crc kubenswrapper[4940]: E0223 09:57:54.970853 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796\": container with ID starting with b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796 not found: ID does not exist" containerID="b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.970953 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796"} err="failed to get container status \"b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796\": rpc error: code = NotFound desc = could not find container \"b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796\": container with ID starting with b96a38b3137ef320fda6a835993ada5d9a6b5e24e03f01ef8177f809afd9b796 not found: 
ID does not exist" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.971054 4940 scope.go:117] "RemoveContainer" containerID="240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0" Feb 23 09:57:54 crc kubenswrapper[4940]: E0223 09:57:54.971636 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0\": container with ID starting with 240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0 not found: ID does not exist" containerID="240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.971684 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0"} err="failed to get container status \"240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0\": rpc error: code = NotFound desc = could not find container \"240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0\": container with ID starting with 240d7ec256d5e3f51d21423ebefbc4d610d87f6e24ddb8f58cf9b7b96b66d2f0 not found: ID does not exist" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.971716 4940 scope.go:117] "RemoveContainer" containerID="97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612" Feb 23 09:57:54 crc kubenswrapper[4940]: E0223 09:57:54.972016 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612\": container with ID starting with 97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612 not found: ID does not exist" containerID="97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612" Feb 23 09:57:54 crc kubenswrapper[4940]: I0223 09:57:54.972041 4940 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612"} err="failed to get container status \"97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612\": rpc error: code = NotFound desc = could not find container \"97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612\": container with ID starting with 97103e0cbd8cdc7d5b6c9477b42823490dc97ea158e999a004c1a76fcb05b612 not found: ID does not exist" Feb 23 09:57:55 crc kubenswrapper[4940]: I0223 09:57:55.355707 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54dfcd0-73b2-4250-a855-047f92840874" path="/var/lib/kubelet/pods/e54dfcd0-73b2-4250-a855-047f92840874/volumes" Feb 23 09:59:01 crc kubenswrapper[4940]: I0223 09:59:01.429777 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:59:01 crc kubenswrapper[4940]: I0223 09:59:01.430378 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.713119 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cts92"] Feb 23 09:59:30 crc kubenswrapper[4940]: E0223 09:59:30.713932 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="extract-content" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.713946 4940 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="extract-content" Feb 23 09:59:30 crc kubenswrapper[4940]: E0223 09:59:30.713962 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="extract-utilities" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.713968 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="extract-utilities" Feb 23 09:59:30 crc kubenswrapper[4940]: E0223 09:59:30.713992 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="registry-server" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.713998 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="registry-server" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.714191 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54dfcd0-73b2-4250-a855-047f92840874" containerName="registry-server" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.715553 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.737045 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cts92"] Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.878523 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-utilities\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.878568 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xzs\" (UniqueName: \"kubernetes.io/projected/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-kube-api-access-w7xzs\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.878758 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-catalog-content\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.980606 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-catalog-content\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.980799 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-utilities\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.980823 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xzs\" (UniqueName: \"kubernetes.io/projected/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-kube-api-access-w7xzs\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.981106 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-catalog-content\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:30 crc kubenswrapper[4940]: I0223 09:59:30.981225 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-utilities\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:31 crc kubenswrapper[4940]: I0223 09:59:31.002296 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7xzs\" (UniqueName: \"kubernetes.io/projected/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-kube-api-access-w7xzs\") pod \"community-operators-cts92\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:31 crc kubenswrapper[4940]: I0223 09:59:31.037802 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:31 crc kubenswrapper[4940]: I0223 09:59:31.429163 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 09:59:31 crc kubenswrapper[4940]: I0223 09:59:31.429756 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 09:59:31 crc kubenswrapper[4940]: I0223 09:59:31.616498 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cts92"] Feb 23 09:59:31 crc kubenswrapper[4940]: I0223 09:59:31.726092 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerStarted","Data":"d8a612f902efaad1f385ed24ba681b288a3b30130934b669c6e2f8bbd7f289ca"} Feb 23 09:59:32 crc kubenswrapper[4940]: I0223 09:59:32.740789 4940 generic.go:334] "Generic (PLEG): container finished" podID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerID="228a08f25b4757fe5576c52734a86b1b0b0e6c683384bfb5ca0be9e02c09a5fa" exitCode=0 Feb 23 09:59:32 crc kubenswrapper[4940]: I0223 09:59:32.741061 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerDied","Data":"228a08f25b4757fe5576c52734a86b1b0b0e6c683384bfb5ca0be9e02c09a5fa"} Feb 23 09:59:33 crc kubenswrapper[4940]: I0223 09:59:33.753357 4940 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerStarted","Data":"b754c1111d218ed031deae4ae1109dad4fdc67cd3b4a5fabc91d90853c77f831"} Feb 23 09:59:34 crc kubenswrapper[4940]: I0223 09:59:34.764700 4940 generic.go:334] "Generic (PLEG): container finished" podID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerID="b754c1111d218ed031deae4ae1109dad4fdc67cd3b4a5fabc91d90853c77f831" exitCode=0 Feb 23 09:59:34 crc kubenswrapper[4940]: I0223 09:59:34.764911 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerDied","Data":"b754c1111d218ed031deae4ae1109dad4fdc67cd3b4a5fabc91d90853c77f831"} Feb 23 09:59:35 crc kubenswrapper[4940]: I0223 09:59:35.803524 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerStarted","Data":"f7a694219ee1abbf2435f087cd1b74ff5e250a5ec31fef1526f0464b155b2b06"} Feb 23 09:59:35 crc kubenswrapper[4940]: I0223 09:59:35.823386 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cts92" podStartSLOduration=3.427645943 podStartE2EDuration="5.823361014s" podCreationTimestamp="2026-02-23 09:59:30 +0000 UTC" firstStartedPulling="2026-02-23 09:59:32.743876389 +0000 UTC m=+4304.127082556" lastFinishedPulling="2026-02-23 09:59:35.13959147 +0000 UTC m=+4306.522797627" observedRunningTime="2026-02-23 09:59:35.821447584 +0000 UTC m=+4307.204653751" watchObservedRunningTime="2026-02-23 09:59:35.823361014 +0000 UTC m=+4307.206567171" Feb 23 09:59:41 crc kubenswrapper[4940]: I0223 09:59:41.038296 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:41 crc kubenswrapper[4940]: I0223 
09:59:41.038859 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:41 crc kubenswrapper[4940]: I0223 09:59:41.571680 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:41 crc kubenswrapper[4940]: I0223 09:59:41.914058 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:41 crc kubenswrapper[4940]: I0223 09:59:41.974079 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cts92"] Feb 23 09:59:43 crc kubenswrapper[4940]: I0223 09:59:43.876530 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cts92" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="registry-server" containerID="cri-o://f7a694219ee1abbf2435f087cd1b74ff5e250a5ec31fef1526f0464b155b2b06" gracePeriod=2 Feb 23 09:59:44 crc kubenswrapper[4940]: I0223 09:59:44.891588 4940 generic.go:334] "Generic (PLEG): container finished" podID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerID="f7a694219ee1abbf2435f087cd1b74ff5e250a5ec31fef1526f0464b155b2b06" exitCode=0 Feb 23 09:59:44 crc kubenswrapper[4940]: I0223 09:59:44.891670 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerDied","Data":"f7a694219ee1abbf2435f087cd1b74ff5e250a5ec31fef1526f0464b155b2b06"} Feb 23 09:59:44 crc kubenswrapper[4940]: I0223 09:59:44.892051 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cts92" event={"ID":"44331892-e7c0-4f22-9cc1-5cfd64aaa81f","Type":"ContainerDied","Data":"d8a612f902efaad1f385ed24ba681b288a3b30130934b669c6e2f8bbd7f289ca"} Feb 23 09:59:44 crc 
kubenswrapper[4940]: I0223 09:59:44.892079 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a612f902efaad1f385ed24ba681b288a3b30130934b669c6e2f8bbd7f289ca" Feb 23 09:59:44 crc kubenswrapper[4940]: I0223 09:59:44.982551 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.021088 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-utilities\") pod \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.021380 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7xzs\" (UniqueName: \"kubernetes.io/projected/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-kube-api-access-w7xzs\") pod \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.021414 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-catalog-content\") pod \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\" (UID: \"44331892-e7c0-4f22-9cc1-5cfd64aaa81f\") " Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.022755 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-utilities" (OuterVolumeSpecName: "utilities") pod "44331892-e7c0-4f22-9cc1-5cfd64aaa81f" (UID: "44331892-e7c0-4f22-9cc1-5cfd64aaa81f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.023595 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.028937 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-kube-api-access-w7xzs" (OuterVolumeSpecName: "kube-api-access-w7xzs") pod "44331892-e7c0-4f22-9cc1-5cfd64aaa81f" (UID: "44331892-e7c0-4f22-9cc1-5cfd64aaa81f"). InnerVolumeSpecName "kube-api-access-w7xzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.081387 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44331892-e7c0-4f22-9cc1-5cfd64aaa81f" (UID: "44331892-e7c0-4f22-9cc1-5cfd64aaa81f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.126121 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7xzs\" (UniqueName: \"kubernetes.io/projected/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-kube-api-access-w7xzs\") on node \"crc\" DevicePath \"\"" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.126166 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44331892-e7c0-4f22-9cc1-5cfd64aaa81f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.899871 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cts92" Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.928799 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cts92"] Feb 23 09:59:45 crc kubenswrapper[4940]: I0223 09:59:45.939263 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cts92"] Feb 23 09:59:47 crc kubenswrapper[4940]: I0223 09:59:47.357105 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" path="/var/lib/kubelet/pods/44331892-e7c0-4f22-9cc1-5cfd64aaa81f/volumes" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.210167 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf"] Feb 23 10:00:00 crc kubenswrapper[4940]: E0223 10:00:00.211149 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="extract-content" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.211167 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="extract-content" Feb 23 10:00:00 crc kubenswrapper[4940]: E0223 10:00:00.211181 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="registry-server" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.211187 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="registry-server" Feb 23 10:00:00 crc kubenswrapper[4940]: E0223 10:00:00.211203 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="extract-utilities" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.211210 4940 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="extract-utilities" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.211412 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="44331892-e7c0-4f22-9cc1-5cfd64aaa81f" containerName="registry-server" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.212294 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.219865 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.220301 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.228269 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf"] Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.321538 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm9q\" (UniqueName: \"kubernetes.io/projected/cfae0562-d6df-44e8-b305-3efe8ca5514b-kube-api-access-hhm9q\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.321833 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfae0562-d6df-44e8-b305-3efe8ca5514b-config-volume\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 
10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.321887 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfae0562-d6df-44e8-b305-3efe8ca5514b-secret-volume\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.424090 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfae0562-d6df-44e8-b305-3efe8ca5514b-config-volume\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.424145 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfae0562-d6df-44e8-b305-3efe8ca5514b-secret-volume\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.424207 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm9q\" (UniqueName: \"kubernetes.io/projected/cfae0562-d6df-44e8-b305-3efe8ca5514b-kube-api-access-hhm9q\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.425058 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfae0562-d6df-44e8-b305-3efe8ca5514b-config-volume\") pod \"collect-profiles-29530680-q9cpf\" (UID: 
\"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.432328 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfae0562-d6df-44e8-b305-3efe8ca5514b-secret-volume\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.455533 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm9q\" (UniqueName: \"kubernetes.io/projected/cfae0562-d6df-44e8-b305-3efe8ca5514b-kube-api-access-hhm9q\") pod \"collect-profiles-29530680-q9cpf\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:00 crc kubenswrapper[4940]: I0223 10:00:00.536010 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.002992 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf"] Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.036833 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" event={"ID":"cfae0562-d6df-44e8-b305-3efe8ca5514b","Type":"ContainerStarted","Data":"973e7ea47d0c2bd31efd0c87606aae0fc1d452ca398aa1d91402b5956cf8ccad"} Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.429003 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.429354 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.429407 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.430365 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 23 10:00:01 crc kubenswrapper[4940]: I0223 10:00:01.430438 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" gracePeriod=600 Feb 23 10:00:01 crc kubenswrapper[4940]: E0223 10:00:01.565837 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:00:02 crc kubenswrapper[4940]: I0223 10:00:02.051187 4940 generic.go:334] "Generic (PLEG): container finished" podID="cfae0562-d6df-44e8-b305-3efe8ca5514b" containerID="7810e09e84dc346126d8528e9b980da3ad3186477473fb11828f7011ed10ad42" exitCode=0 Feb 23 10:00:02 crc kubenswrapper[4940]: I0223 10:00:02.051347 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" event={"ID":"cfae0562-d6df-44e8-b305-3efe8ca5514b","Type":"ContainerDied","Data":"7810e09e84dc346126d8528e9b980da3ad3186477473fb11828f7011ed10ad42"} Feb 23 10:00:02 crc kubenswrapper[4940]: I0223 10:00:02.058101 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" exitCode=0 Feb 23 10:00:02 crc kubenswrapper[4940]: I0223 10:00:02.058139 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da"} Feb 23 10:00:02 crc kubenswrapper[4940]: I0223 10:00:02.058178 4940 scope.go:117] "RemoveContainer" containerID="d5c3f3b7aec9a80c96d9d04faed873fc7c641456f2cf0c11aa7e7508b7d5887d" Feb 23 10:00:02 crc kubenswrapper[4940]: I0223 10:00:02.058971 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:00:02 crc kubenswrapper[4940]: E0223 10:00:02.059453 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.550604 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.704431 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfae0562-d6df-44e8-b305-3efe8ca5514b-secret-volume\") pod \"cfae0562-d6df-44e8-b305-3efe8ca5514b\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.704702 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhm9q\" (UniqueName: \"kubernetes.io/projected/cfae0562-d6df-44e8-b305-3efe8ca5514b-kube-api-access-hhm9q\") pod \"cfae0562-d6df-44e8-b305-3efe8ca5514b\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.704777 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfae0562-d6df-44e8-b305-3efe8ca5514b-config-volume\") pod \"cfae0562-d6df-44e8-b305-3efe8ca5514b\" (UID: \"cfae0562-d6df-44e8-b305-3efe8ca5514b\") " Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.706205 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfae0562-d6df-44e8-b305-3efe8ca5514b-config-volume" (OuterVolumeSpecName: "config-volume") pod "cfae0562-d6df-44e8-b305-3efe8ca5514b" (UID: "cfae0562-d6df-44e8-b305-3efe8ca5514b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.712997 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfae0562-d6df-44e8-b305-3efe8ca5514b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cfae0562-d6df-44e8-b305-3efe8ca5514b" (UID: "cfae0562-d6df-44e8-b305-3efe8ca5514b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.718787 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfae0562-d6df-44e8-b305-3efe8ca5514b-kube-api-access-hhm9q" (OuterVolumeSpecName: "kube-api-access-hhm9q") pod "cfae0562-d6df-44e8-b305-3efe8ca5514b" (UID: "cfae0562-d6df-44e8-b305-3efe8ca5514b"). InnerVolumeSpecName "kube-api-access-hhm9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.807822 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfae0562-d6df-44e8-b305-3efe8ca5514b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.807862 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhm9q\" (UniqueName: \"kubernetes.io/projected/cfae0562-d6df-44e8-b305-3efe8ca5514b-kube-api-access-hhm9q\") on node \"crc\" DevicePath \"\"" Feb 23 10:00:03 crc kubenswrapper[4940]: I0223 10:00:03.807874 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfae0562-d6df-44e8-b305-3efe8ca5514b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:00:04 crc kubenswrapper[4940]: I0223 10:00:04.081946 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" event={"ID":"cfae0562-d6df-44e8-b305-3efe8ca5514b","Type":"ContainerDied","Data":"973e7ea47d0c2bd31efd0c87606aae0fc1d452ca398aa1d91402b5956cf8ccad"} Feb 23 10:00:04 crc kubenswrapper[4940]: I0223 10:00:04.082000 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="973e7ea47d0c2bd31efd0c87606aae0fc1d452ca398aa1d91402b5956cf8ccad" Feb 23 10:00:04 crc kubenswrapper[4940]: I0223 10:00:04.082003 4940 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530680-q9cpf" Feb 23 10:00:04 crc kubenswrapper[4940]: I0223 10:00:04.671311 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7"] Feb 23 10:00:04 crc kubenswrapper[4940]: I0223 10:00:04.688005 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530635-xqbs7"] Feb 23 10:00:05 crc kubenswrapper[4940]: I0223 10:00:05.356983 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7" path="/var/lib/kubelet/pods/2d84e3f5-dc4c-46fc-b34a-e4d9d30d80d7/volumes" Feb 23 10:00:13 crc kubenswrapper[4940]: I0223 10:00:13.349441 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:00:13 crc kubenswrapper[4940]: E0223 10:00:13.350321 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:00:25 crc kubenswrapper[4940]: I0223 10:00:25.345775 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:00:25 crc kubenswrapper[4940]: E0223 10:00:25.346595 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:00:36 crc kubenswrapper[4940]: I0223 10:00:36.345513 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:00:36 crc kubenswrapper[4940]: E0223 10:00:36.346438 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:00:36 crc kubenswrapper[4940]: I0223 10:00:36.530143 4940 scope.go:117] "RemoveContainer" containerID="715738e9152aafffc6accd0ba77eb6c14ee3a7826e604ac54f16ceb11f80f540" Feb 23 10:00:48 crc kubenswrapper[4940]: I0223 10:00:48.346199 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:00:48 crc kubenswrapper[4940]: E0223 10:00:48.346894 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.187932 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29530681-kgpdn"] Feb 23 10:01:00 crc kubenswrapper[4940]: E0223 10:01:00.189348 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfae0562-d6df-44e8-b305-3efe8ca5514b" 
containerName="collect-profiles" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.189368 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfae0562-d6df-44e8-b305-3efe8ca5514b" containerName="collect-profiles" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.189653 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfae0562-d6df-44e8-b305-3efe8ca5514b" containerName="collect-profiles" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.190698 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.197437 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29530681-kgpdn"] Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.345521 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:01:00 crc kubenswrapper[4940]: E0223 10:01:00.345889 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.372022 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-fernet-keys\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.372213 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8tlls\" (UniqueName: \"kubernetes.io/projected/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-kube-api-access-8tlls\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.372308 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-config-data\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.372438 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-combined-ca-bundle\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.473984 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-combined-ca-bundle\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.474085 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-fernet-keys\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.474205 4940 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-8tlls\" (UniqueName: \"kubernetes.io/projected/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-kube-api-access-8tlls\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.474262 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-config-data\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.481710 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-combined-ca-bundle\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.487464 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-fernet-keys\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.491034 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-config-data\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.494364 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tlls\" (UniqueName: 
\"kubernetes.io/projected/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-kube-api-access-8tlls\") pod \"keystone-cron-29530681-kgpdn\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.526627 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:00 crc kubenswrapper[4940]: I0223 10:01:00.999565 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29530681-kgpdn"] Feb 23 10:01:01 crc kubenswrapper[4940]: I0223 10:01:01.676071 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530681-kgpdn" event={"ID":"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0","Type":"ContainerStarted","Data":"5c453b68f42e6b5c5ee136011ff0027219912c9faf6886ea020c65a2f4e2dbbc"} Feb 23 10:01:01 crc kubenswrapper[4940]: I0223 10:01:01.676675 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530681-kgpdn" event={"ID":"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0","Type":"ContainerStarted","Data":"0068b13f70fd2e25a5ebec9509614ab3bd76a136a8e7f9a0c06b38420e65217a"} Feb 23 10:01:01 crc kubenswrapper[4940]: I0223 10:01:01.697102 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29530681-kgpdn" podStartSLOduration=1.697079461 podStartE2EDuration="1.697079461s" podCreationTimestamp="2026-02-23 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:01:01.693078165 +0000 UTC m=+4393.076284382" watchObservedRunningTime="2026-02-23 10:01:01.697079461 +0000 UTC m=+4393.080285638" Feb 23 10:01:05 crc kubenswrapper[4940]: I0223 10:01:05.713464 4940 generic.go:334] "Generic (PLEG): container finished" podID="62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" 
containerID="5c453b68f42e6b5c5ee136011ff0027219912c9faf6886ea020c65a2f4e2dbbc" exitCode=0 Feb 23 10:01:05 crc kubenswrapper[4940]: I0223 10:01:05.713595 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530681-kgpdn" event={"ID":"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0","Type":"ContainerDied","Data":"5c453b68f42e6b5c5ee136011ff0027219912c9faf6886ea020c65a2f4e2dbbc"} Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.230859 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.348201 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tlls\" (UniqueName: \"kubernetes.io/projected/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-kube-api-access-8tlls\") pod \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.348274 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-combined-ca-bundle\") pod \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.348351 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-config-data\") pod \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\" (UID: \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.348586 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-fernet-keys\") pod \"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\" (UID: 
\"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0\") " Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.359893 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" (UID: "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.362844 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-kube-api-access-8tlls" (OuterVolumeSpecName: "kube-api-access-8tlls") pod "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" (UID: "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0"). InnerVolumeSpecName "kube-api-access-8tlls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.396063 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" (UID: "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.418540 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-config-data" (OuterVolumeSpecName: "config-data") pod "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" (UID: "62c1078b-acb9-4ce6-9c47-290a2ec6e9b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.451545 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tlls\" (UniqueName: \"kubernetes.io/projected/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-kube-api-access-8tlls\") on node \"crc\" DevicePath \"\"" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.451583 4940 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.451597 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.451608 4940 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/62c1078b-acb9-4ce6-9c47-290a2ec6e9b0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.732146 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530681-kgpdn" event={"ID":"62c1078b-acb9-4ce6-9c47-290a2ec6e9b0","Type":"ContainerDied","Data":"0068b13f70fd2e25a5ebec9509614ab3bd76a136a8e7f9a0c06b38420e65217a"} Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.732185 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0068b13f70fd2e25a5ebec9509614ab3bd76a136a8e7f9a0c06b38420e65217a" Feb 23 10:01:07 crc kubenswrapper[4940]: I0223 10:01:07.732239 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29530681-kgpdn" Feb 23 10:01:14 crc kubenswrapper[4940]: I0223 10:01:14.345539 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:01:14 crc kubenswrapper[4940]: E0223 10:01:14.346387 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:01:26 crc kubenswrapper[4940]: I0223 10:01:26.345891 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:01:26 crc kubenswrapper[4940]: E0223 10:01:26.346960 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:01:39 crc kubenswrapper[4940]: I0223 10:01:39.352440 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:01:39 crc kubenswrapper[4940]: E0223 10:01:39.354747 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:01:50 crc kubenswrapper[4940]: I0223 10:01:50.346702 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:01:50 crc kubenswrapper[4940]: E0223 10:01:50.348359 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:02:02 crc kubenswrapper[4940]: I0223 10:02:02.345796 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:02:02 crc kubenswrapper[4940]: E0223 10:02:02.346580 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:02:16 crc kubenswrapper[4940]: I0223 10:02:16.346244 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:02:16 crc kubenswrapper[4940]: E0223 10:02:16.347114 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:02:27 crc kubenswrapper[4940]: I0223 10:02:27.346250 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:02:27 crc kubenswrapper[4940]: E0223 10:02:27.347109 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:02:42 crc kubenswrapper[4940]: I0223 10:02:42.346035 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:02:42 crc kubenswrapper[4940]: E0223 10:02:42.346863 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:02:55 crc kubenswrapper[4940]: I0223 10:02:55.346342 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:02:55 crc kubenswrapper[4940]: E0223 10:02:55.347272 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:03:08 crc kubenswrapper[4940]: I0223 10:03:08.345567 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:03:08 crc kubenswrapper[4940]: E0223 10:03:08.346317 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:03:20 crc kubenswrapper[4940]: I0223 10:03:20.346335 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:03:20 crc kubenswrapper[4940]: E0223 10:03:20.347116 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:03:31 crc kubenswrapper[4940]: I0223 10:03:31.346004 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:03:31 crc kubenswrapper[4940]: E0223 10:03:31.346867 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:03:42 crc kubenswrapper[4940]: I0223 10:03:42.346446 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:03:42 crc kubenswrapper[4940]: E0223 10:03:42.347286 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:03:53 crc kubenswrapper[4940]: I0223 10:03:53.346168 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:03:53 crc kubenswrapper[4940]: E0223 10:03:53.347314 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:04:08 crc kubenswrapper[4940]: I0223 10:04:08.346185 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:04:08 crc kubenswrapper[4940]: E0223 10:04:08.346989 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:04:22 crc kubenswrapper[4940]: I0223 10:04:22.346714 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:04:22 crc kubenswrapper[4940]: E0223 10:04:22.347500 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:04:36 crc kubenswrapper[4940]: I0223 10:04:36.345834 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:04:36 crc kubenswrapper[4940]: E0223 10:04:36.346681 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:04:50 crc kubenswrapper[4940]: I0223 10:04:50.345816 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:04:50 crc kubenswrapper[4940]: E0223 10:04:50.346585 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:05:02 crc kubenswrapper[4940]: I0223 10:05:02.345956 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:05:02 crc kubenswrapper[4940]: I0223 10:05:02.940976 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"73564eb7ca7da05d30ce02d7b7f4f13edd916d63410bf2494932823351662582"} Feb 23 10:05:36 crc kubenswrapper[4940]: I0223 10:05:36.661866 4940 scope.go:117] "RemoveContainer" containerID="228a08f25b4757fe5576c52734a86b1b0b0e6c683384bfb5ca0be9e02c09a5fa" Feb 23 10:05:36 crc kubenswrapper[4940]: I0223 10:05:36.687058 4940 scope.go:117] "RemoveContainer" containerID="b754c1111d218ed031deae4ae1109dad4fdc67cd3b4a5fabc91d90853c77f831" Feb 23 10:05:36 crc kubenswrapper[4940]: I0223 10:05:36.796314 4940 scope.go:117] "RemoveContainer" containerID="f7a694219ee1abbf2435f087cd1b74ff5e250a5ec31fef1526f0464b155b2b06" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.472511 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xksh4"] Feb 23 10:06:02 crc kubenswrapper[4940]: E0223 10:06:02.473550 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" containerName="keystone-cron" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.473568 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" containerName="keystone-cron" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.473778 4940 
memory_manager.go:354] "RemoveStaleState removing state" podUID="62c1078b-acb9-4ce6-9c47-290a2ec6e9b0" containerName="keystone-cron" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.475340 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.484973 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xksh4"] Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.658099 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-utilities\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.658323 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-catalog-content\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.658394 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhrp\" (UniqueName: \"kubernetes.io/projected/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-kube-api-access-xmhrp\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.760303 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-utilities\") pod 
\"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.760710 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-catalog-content\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.760864 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmhrp\" (UniqueName: \"kubernetes.io/projected/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-kube-api-access-xmhrp\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.760971 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-catalog-content\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.760771 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-utilities\") pod \"redhat-operators-xksh4\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.783326 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmhrp\" (UniqueName: \"kubernetes.io/projected/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-kube-api-access-xmhrp\") pod \"redhat-operators-xksh4\" (UID: 
\"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:02 crc kubenswrapper[4940]: I0223 10:06:02.794936 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:03 crc kubenswrapper[4940]: I0223 10:06:03.270092 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xksh4"] Feb 23 10:06:03 crc kubenswrapper[4940]: I0223 10:06:03.888532 4940 generic.go:334] "Generic (PLEG): container finished" podID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerID="3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842" exitCode=0 Feb 23 10:06:03 crc kubenswrapper[4940]: I0223 10:06:03.888626 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerDied","Data":"3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842"} Feb 23 10:06:03 crc kubenswrapper[4940]: I0223 10:06:03.888677 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerStarted","Data":"ffcbfa3107fe877f13602de1b28dc5f1bf4177927e0a134ef54a343324baf22d"} Feb 23 10:06:03 crc kubenswrapper[4940]: I0223 10:06:03.890791 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:06:04 crc kubenswrapper[4940]: I0223 10:06:04.901490 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerStarted","Data":"3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01"} Feb 23 10:06:08 crc kubenswrapper[4940]: I0223 10:06:08.951750 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerID="3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01" exitCode=0 Feb 23 10:06:08 crc kubenswrapper[4940]: I0223 10:06:08.951843 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerDied","Data":"3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01"} Feb 23 10:06:09 crc kubenswrapper[4940]: I0223 10:06:09.966180 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerStarted","Data":"fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218"} Feb 23 10:06:09 crc kubenswrapper[4940]: I0223 10:06:09.991347 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xksh4" podStartSLOduration=2.5475262 podStartE2EDuration="7.991326062s" podCreationTimestamp="2026-02-23 10:06:02 +0000 UTC" firstStartedPulling="2026-02-23 10:06:03.890466305 +0000 UTC m=+4695.273672462" lastFinishedPulling="2026-02-23 10:06:09.334266167 +0000 UTC m=+4700.717472324" observedRunningTime="2026-02-23 10:06:09.987376437 +0000 UTC m=+4701.370582594" watchObservedRunningTime="2026-02-23 10:06:09.991326062 +0000 UTC m=+4701.374532219" Feb 23 10:06:12 crc kubenswrapper[4940]: I0223 10:06:12.795702 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:12 crc kubenswrapper[4940]: I0223 10:06:12.796224 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:13 crc kubenswrapper[4940]: I0223 10:06:13.854159 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xksh4" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" 
containerName="registry-server" probeResult="failure" output=< Feb 23 10:06:13 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 10:06:13 crc kubenswrapper[4940]: > Feb 23 10:06:22 crc kubenswrapper[4940]: I0223 10:06:22.864707 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:22 crc kubenswrapper[4940]: I0223 10:06:22.942718 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:23 crc kubenswrapper[4940]: I0223 10:06:23.103331 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xksh4"] Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.095163 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xksh4" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="registry-server" containerID="cri-o://fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218" gracePeriod=2 Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.603839 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.775167 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmhrp\" (UniqueName: \"kubernetes.io/projected/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-kube-api-access-xmhrp\") pod \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.775674 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-utilities\") pod \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.775756 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-catalog-content\") pod \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\" (UID: \"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d\") " Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.777370 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-utilities" (OuterVolumeSpecName: "utilities") pod "74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" (UID: "74dbe3e1-b592-4ebf-95d7-2f90b7385b2d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.878744 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.923314 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" (UID: "74dbe3e1-b592-4ebf-95d7-2f90b7385b2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:06:24 crc kubenswrapper[4940]: I0223 10:06:24.981157 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.111202 4940 generic.go:334] "Generic (PLEG): container finished" podID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerID="fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218" exitCode=0 Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.111256 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xksh4" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.111289 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerDied","Data":"fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218"} Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.111408 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xksh4" event={"ID":"74dbe3e1-b592-4ebf-95d7-2f90b7385b2d","Type":"ContainerDied","Data":"ffcbfa3107fe877f13602de1b28dc5f1bf4177927e0a134ef54a343324baf22d"} Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.111463 4940 scope.go:117] "RemoveContainer" containerID="fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.141742 4940 scope.go:117] "RemoveContainer" containerID="3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.427254 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-kube-api-access-xmhrp" (OuterVolumeSpecName: "kube-api-access-xmhrp") pod "74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" (UID: "74dbe3e1-b592-4ebf-95d7-2f90b7385b2d"). InnerVolumeSpecName "kube-api-access-xmhrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.445380 4940 scope.go:117] "RemoveContainer" containerID="3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.505529 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmhrp\" (UniqueName: \"kubernetes.io/projected/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d-kube-api-access-xmhrp\") on node \"crc\" DevicePath \"\"" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.543889 4940 scope.go:117] "RemoveContainer" containerID="fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218" Feb 23 10:06:25 crc kubenswrapper[4940]: E0223 10:06:25.544600 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218\": container with ID starting with fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218 not found: ID does not exist" containerID="fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.544704 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218"} err="failed to get container status \"fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218\": rpc error: code = NotFound desc = could not find container \"fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218\": container with ID starting with fd124723c7585a09dbfe91e52b7f2bcc720bd72ff80c5e160e3a44ff2e0c0218 not found: ID does not exist" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.544742 4940 scope.go:117] "RemoveContainer" containerID="3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01" Feb 23 10:06:25 crc kubenswrapper[4940]: E0223 10:06:25.545364 
4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01\": container with ID starting with 3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01 not found: ID does not exist" containerID="3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.545439 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01"} err="failed to get container status \"3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01\": rpc error: code = NotFound desc = could not find container \"3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01\": container with ID starting with 3d6d03927a0093743bc015e9d63111fbab3876423f2eb6fb4262e0a668956a01 not found: ID does not exist" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.545487 4940 scope.go:117] "RemoveContainer" containerID="3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842" Feb 23 10:06:25 crc kubenswrapper[4940]: E0223 10:06:25.545991 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842\": container with ID starting with 3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842 not found: ID does not exist" containerID="3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.546069 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842"} err="failed to get container status \"3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842\": rpc error: code = 
NotFound desc = could not find container \"3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842\": container with ID starting with 3ff471544b35e87c18ef1e9c21a63da25d1beb17328657cbac42a3ff40d21842 not found: ID does not exist" Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.768345 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xksh4"] Feb 23 10:06:25 crc kubenswrapper[4940]: I0223 10:06:25.777087 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xksh4"] Feb 23 10:06:27 crc kubenswrapper[4940]: I0223 10:06:27.357819 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" path="/var/lib/kubelet/pods/74dbe3e1-b592-4ebf-95d7-2f90b7385b2d/volumes" Feb 23 10:07:03 crc kubenswrapper[4940]: I0223 10:07:03.520138 4940 generic.go:334] "Generic (PLEG): container finished" podID="c7cd2a10-7128-40ff-98b8-6d3026b08566" containerID="0e03c7ffc9ed6ac4348d53f29a3feb3bbb26909466b8232d2fdf482217df0f15" exitCode=0 Feb 23 10:07:03 crc kubenswrapper[4940]: I0223 10:07:03.520211 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c7cd2a10-7128-40ff-98b8-6d3026b08566","Type":"ContainerDied","Data":"0e03c7ffc9ed6ac4348d53f29a3feb3bbb26909466b8232d2fdf482217df0f15"} Feb 23 10:07:04 crc kubenswrapper[4940]: I0223 10:07:04.897743 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.038360 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-config-data\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.038752 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.038906 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-temporary\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039007 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config-secret\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039059 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ssh-key\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039137 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ca-certs\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039326 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-config-data" (OuterVolumeSpecName: "config-data") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039576 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039870 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqdk\" (UniqueName: \"kubernetes.io/projected/c7cd2a10-7128-40ff-98b8-6d3026b08566-kube-api-access-bvqdk\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039939 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-workdir\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.039965 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c7cd2a10-7128-40ff-98b8-6d3026b08566\" (UID: \"c7cd2a10-7128-40ff-98b8-6d3026b08566\") " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.040676 4940 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.040696 4940 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.050654 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cd2a10-7128-40ff-98b8-6d3026b08566-kube-api-access-bvqdk" (OuterVolumeSpecName: "kube-api-access-bvqdk") pod 
"c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "kube-api-access-bvqdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.057874 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.067554 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.084521 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.110940 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.132464 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.142272 4940 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c7cd2a10-7128-40ff-98b8-6d3026b08566-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.142975 4940 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.143005 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.143017 4940 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.143025 4940 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c7cd2a10-7128-40ff-98b8-6d3026b08566-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.143033 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvqdk\" (UniqueName: 
\"kubernetes.io/projected/c7cd2a10-7128-40ff-98b8-6d3026b08566-kube-api-access-bvqdk\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.162241 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c7cd2a10-7128-40ff-98b8-6d3026b08566" (UID: "c7cd2a10-7128-40ff-98b8-6d3026b08566"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.169499 4940 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.244807 4940 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7cd2a10-7128-40ff-98b8-6d3026b08566-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.244836 4940 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.538339 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c7cd2a10-7128-40ff-98b8-6d3026b08566","Type":"ContainerDied","Data":"1651ee54a2d524c531fbcb3da84015af21c4e80438e01a53178f22abc41200fd"} Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.538394 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1651ee54a2d524c531fbcb3da84015af21c4e80438e01a53178f22abc41200fd" Feb 23 10:07:05 crc kubenswrapper[4940]: I0223 10:07:05.538493 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.413123 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 10:07:16 crc kubenswrapper[4940]: E0223 10:07:16.414785 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="registry-server" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.414867 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="registry-server" Feb 23 10:07:16 crc kubenswrapper[4940]: E0223 10:07:16.414934 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="extract-utilities" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.414989 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="extract-utilities" Feb 23 10:07:16 crc kubenswrapper[4940]: E0223 10:07:16.415054 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd2a10-7128-40ff-98b8-6d3026b08566" containerName="tempest-tests-tempest-tests-runner" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.415154 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd2a10-7128-40ff-98b8-6d3026b08566" containerName="tempest-tests-tempest-tests-runner" Feb 23 10:07:16 crc kubenswrapper[4940]: E0223 10:07:16.415220 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="extract-content" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.415270 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="extract-content" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.415517 4940 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c7cd2a10-7128-40ff-98b8-6d3026b08566" containerName="tempest-tests-tempest-tests-runner" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.415593 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="74dbe3e1-b592-4ebf-95d7-2f90b7385b2d" containerName="registry-server" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.416592 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.434374 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.503114 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.503247 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqdz\" (UniqueName: \"kubernetes.io/projected/614008c1-1725-42e6-b6b3-407d9b909846-kube-api-access-txqdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.605094 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqdz\" (UniqueName: \"kubernetes.io/projected/614008c1-1725-42e6-b6b3-407d9b909846-kube-api-access-txqdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.605240 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.605685 4940 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.626279 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqdz\" (UniqueName: \"kubernetes.io/projected/614008c1-1725-42e6-b6b3-407d9b909846-kube-api-access-txqdz\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.631361 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"614008c1-1725-42e6-b6b3-407d9b909846\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:16 crc kubenswrapper[4940]: I0223 10:07:16.783518 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 10:07:17 crc kubenswrapper[4940]: I0223 10:07:17.238220 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 10:07:17 crc kubenswrapper[4940]: I0223 10:07:17.668466 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"614008c1-1725-42e6-b6b3-407d9b909846","Type":"ContainerStarted","Data":"3f94a77df25e9190711878086f24efd9205fa3d49da30a284a76929c8ece96dd"} Feb 23 10:07:19 crc kubenswrapper[4940]: I0223 10:07:19.693992 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"614008c1-1725-42e6-b6b3-407d9b909846","Type":"ContainerStarted","Data":"fb4d2d7886f0d3252990bccbf53cc205ca9d5c2d7c183976e6a4ae6ba964b042"} Feb 23 10:07:19 crc kubenswrapper[4940]: I0223 10:07:19.719010 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.41691038 podStartE2EDuration="3.718985581s" podCreationTimestamp="2026-02-23 10:07:16 +0000 UTC" firstStartedPulling="2026-02-23 10:07:17.240386883 +0000 UTC m=+4768.623593030" lastFinishedPulling="2026-02-23 10:07:18.542462074 +0000 UTC m=+4769.925668231" observedRunningTime="2026-02-23 10:07:19.716497393 +0000 UTC m=+4771.099703560" watchObservedRunningTime="2026-02-23 10:07:19.718985581 +0000 UTC m=+4771.102191748" Feb 23 10:07:31 crc kubenswrapper[4940]: I0223 10:07:31.429456 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:07:31 crc 
kubenswrapper[4940]: I0223 10:07:31.430808 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:07:39 crc kubenswrapper[4940]: I0223 10:07:39.957440 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27xfq/must-gather-789bv"] Feb 23 10:07:39 crc kubenswrapper[4940]: I0223 10:07:39.961704 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:39 crc kubenswrapper[4940]: I0223 10:07:39.964944 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-27xfq"/"kube-root-ca.crt" Feb 23 10:07:39 crc kubenswrapper[4940]: I0223 10:07:39.965242 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-27xfq"/"default-dockercfg-qr9ch" Feb 23 10:07:39 crc kubenswrapper[4940]: I0223 10:07:39.966589 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-27xfq"/"openshift-service-ca.crt" Feb 23 10:07:39 crc kubenswrapper[4940]: I0223 10:07:39.973573 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-27xfq/must-gather-789bv"] Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.132784 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19d452b-09de-4c24-8103-0c7614f78ec2-must-gather-output\") pod \"must-gather-789bv\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.132832 4940 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdz97\" (UniqueName: \"kubernetes.io/projected/e19d452b-09de-4c24-8103-0c7614f78ec2-kube-api-access-pdz97\") pod \"must-gather-789bv\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.235043 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19d452b-09de-4c24-8103-0c7614f78ec2-must-gather-output\") pod \"must-gather-789bv\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.235113 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdz97\" (UniqueName: \"kubernetes.io/projected/e19d452b-09de-4c24-8103-0c7614f78ec2-kube-api-access-pdz97\") pod \"must-gather-789bv\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.235597 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19d452b-09de-4c24-8103-0c7614f78ec2-must-gather-output\") pod \"must-gather-789bv\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.255774 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdz97\" (UniqueName: \"kubernetes.io/projected/e19d452b-09de-4c24-8103-0c7614f78ec2-kube-api-access-pdz97\") pod \"must-gather-789bv\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.282732 4940 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.782996 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-27xfq/must-gather-789bv"] Feb 23 10:07:40 crc kubenswrapper[4940]: W0223 10:07:40.785292 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19d452b_09de_4c24_8103_0c7614f78ec2.slice/crio-d63ad25e244ff444fe5567178d5ad2a98484ad1404474995b2b011bd7888919e WatchSource:0}: Error finding container d63ad25e244ff444fe5567178d5ad2a98484ad1404474995b2b011bd7888919e: Status 404 returned error can't find the container with id d63ad25e244ff444fe5567178d5ad2a98484ad1404474995b2b011bd7888919e Feb 23 10:07:40 crc kubenswrapper[4940]: I0223 10:07:40.909185 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/must-gather-789bv" event={"ID":"e19d452b-09de-4c24-8103-0c7614f78ec2","Type":"ContainerStarted","Data":"d63ad25e244ff444fe5567178d5ad2a98484ad1404474995b2b011bd7888919e"} Feb 23 10:07:47 crc kubenswrapper[4940]: I0223 10:07:47.997549 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/must-gather-789bv" event={"ID":"e19d452b-09de-4c24-8103-0c7614f78ec2","Type":"ContainerStarted","Data":"db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b"} Feb 23 10:07:49 crc kubenswrapper[4940]: I0223 10:07:49.008225 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/must-gather-789bv" event={"ID":"e19d452b-09de-4c24-8103-0c7614f78ec2","Type":"ContainerStarted","Data":"b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2"} Feb 23 10:07:49 crc kubenswrapper[4940]: I0223 10:07:49.032451 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-27xfq/must-gather-789bv" podStartSLOduration=3.371093811 
podStartE2EDuration="10.032432707s" podCreationTimestamp="2026-02-23 10:07:39 +0000 UTC" firstStartedPulling="2026-02-23 10:07:40.7889006 +0000 UTC m=+4792.172106757" lastFinishedPulling="2026-02-23 10:07:47.450239496 +0000 UTC m=+4798.833445653" observedRunningTime="2026-02-23 10:07:49.024650223 +0000 UTC m=+4800.407856380" watchObservedRunningTime="2026-02-23 10:07:49.032432707 +0000 UTC m=+4800.415638864" Feb 23 10:07:52 crc kubenswrapper[4940]: E0223 10:07:52.042485 4940 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:58056->38.102.83.222:40203: write tcp 38.102.83.222:58056->38.102.83.222:40203: write: broken pipe Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.131185 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27xfq/crc-debug-dvpkt"] Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.151440 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.255325 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f5ca541-00d0-4deb-896d-acd546d0a819-host\") pod \"crc-debug-dvpkt\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.255827 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndsxf\" (UniqueName: \"kubernetes.io/projected/3f5ca541-00d0-4deb-896d-acd546d0a819-kube-api-access-ndsxf\") pod \"crc-debug-dvpkt\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.357402 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndsxf\" 
(UniqueName: \"kubernetes.io/projected/3f5ca541-00d0-4deb-896d-acd546d0a819-kube-api-access-ndsxf\") pod \"crc-debug-dvpkt\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.357499 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f5ca541-00d0-4deb-896d-acd546d0a819-host\") pod \"crc-debug-dvpkt\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.357833 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f5ca541-00d0-4deb-896d-acd546d0a819-host\") pod \"crc-debug-dvpkt\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.376214 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndsxf\" (UniqueName: \"kubernetes.io/projected/3f5ca541-00d0-4deb-896d-acd546d0a819-kube-api-access-ndsxf\") pod \"crc-debug-dvpkt\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:53 crc kubenswrapper[4940]: I0223 10:07:53.479168 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:07:54 crc kubenswrapper[4940]: I0223 10:07:54.089421 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" event={"ID":"3f5ca541-00d0-4deb-896d-acd546d0a819","Type":"ContainerStarted","Data":"b30ad7016c134fb341d4432e951e948e3202c842b5604c97f888a2d47f60a84c"} Feb 23 10:07:55 crc kubenswrapper[4940]: I0223 10:07:55.745845 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxfm"] Feb 23 10:07:55 crc kubenswrapper[4940]: I0223 10:07:55.748156 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:55 crc kubenswrapper[4940]: I0223 10:07:55.768513 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxfm"] Feb 23 10:07:55 crc kubenswrapper[4940]: I0223 10:07:55.911657 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-catalog-content\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:55 crc kubenswrapper[4940]: I0223 10:07:55.911732 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-utilities\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:55 crc kubenswrapper[4940]: I0223 10:07:55.912037 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkg8\" (UniqueName: 
\"kubernetes.io/projected/1285cc0b-bda4-4d3a-9bdc-98650295bd09-kube-api-access-jdkg8\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.014195 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-utilities\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.014401 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkg8\" (UniqueName: \"kubernetes.io/projected/1285cc0b-bda4-4d3a-9bdc-98650295bd09-kube-api-access-jdkg8\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.014587 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-catalog-content\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.015147 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-utilities\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.015191 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-catalog-content\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.339795 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkg8\" (UniqueName: \"kubernetes.io/projected/1285cc0b-bda4-4d3a-9bdc-98650295bd09-kube-api-access-jdkg8\") pod \"redhat-marketplace-hvxfm\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.348848 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdf6v"] Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.351857 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.372329 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdf6v"] Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.421448 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7klcm\" (UniqueName: \"kubernetes.io/projected/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-kube-api-access-7klcm\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.421557 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-catalog-content\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" 
Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.421627 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-utilities\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.434136 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.523240 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-catalog-content\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.523657 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-utilities\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.524112 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7klcm\" (UniqueName: \"kubernetes.io/projected/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-kube-api-access-7klcm\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.525333 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-catalog-content\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.525380 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-utilities\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.545234 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7klcm\" (UniqueName: \"kubernetes.io/projected/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-kube-api-access-7klcm\") pod \"certified-operators-kdf6v\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.607175 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:07:56 crc kubenswrapper[4940]: I0223 10:07:56.974572 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxfm"] Feb 23 10:07:57 crc kubenswrapper[4940]: I0223 10:07:57.123968 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxfm" event={"ID":"1285cc0b-bda4-4d3a-9bdc-98650295bd09","Type":"ContainerStarted","Data":"1cd4706ff99ed644e9d75c79be5a4bcf5ace46583a32816e25b879bee8f3c5d9"} Feb 23 10:07:57 crc kubenswrapper[4940]: I0223 10:07:57.149858 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdf6v"] Feb 23 10:07:57 crc kubenswrapper[4940]: W0223 10:07:57.184963 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecadaddd_3b79_40ab_8938_5d8bc8c8d01a.slice/crio-7b11b3abe6170e724ed29b0561c2c2a0fc069fbd199041ece1356d91173094de WatchSource:0}: Error finding container 7b11b3abe6170e724ed29b0561c2c2a0fc069fbd199041ece1356d91173094de: Status 404 returned error can't find the container with id 7b11b3abe6170e724ed29b0561c2c2a0fc069fbd199041ece1356d91173094de Feb 23 10:07:58 crc kubenswrapper[4940]: I0223 10:07:58.135694 4940 generic.go:334] "Generic (PLEG): container finished" podID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerID="3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb" exitCode=0 Feb 23 10:07:58 crc kubenswrapper[4940]: I0223 10:07:58.135728 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxfm" event={"ID":"1285cc0b-bda4-4d3a-9bdc-98650295bd09","Type":"ContainerDied","Data":"3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb"} Feb 23 10:07:58 crc kubenswrapper[4940]: I0223 10:07:58.141221 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerID="d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329" exitCode=0 Feb 23 10:07:58 crc kubenswrapper[4940]: I0223 10:07:58.141266 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdf6v" event={"ID":"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a","Type":"ContainerDied","Data":"d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329"} Feb 23 10:07:58 crc kubenswrapper[4940]: I0223 10:07:58.141292 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdf6v" event={"ID":"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a","Type":"ContainerStarted","Data":"7b11b3abe6170e724ed29b0561c2c2a0fc069fbd199041ece1356d91173094de"} Feb 23 10:08:01 crc kubenswrapper[4940]: I0223 10:08:01.430595 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:08:01 crc kubenswrapper[4940]: I0223 10:08:01.431293 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:08:06 crc kubenswrapper[4940]: I0223 10:08:06.213239 4940 generic.go:334] "Generic (PLEG): container finished" podID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerID="f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944" exitCode=0 Feb 23 10:08:06 crc kubenswrapper[4940]: I0223 10:08:06.213314 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxfm" 
event={"ID":"1285cc0b-bda4-4d3a-9bdc-98650295bd09","Type":"ContainerDied","Data":"f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944"} Feb 23 10:08:06 crc kubenswrapper[4940]: I0223 10:08:06.218577 4940 generic.go:334] "Generic (PLEG): container finished" podID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerID="824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120" exitCode=0 Feb 23 10:08:06 crc kubenswrapper[4940]: I0223 10:08:06.218703 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdf6v" event={"ID":"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a","Type":"ContainerDied","Data":"824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120"} Feb 23 10:08:06 crc kubenswrapper[4940]: I0223 10:08:06.221464 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" event={"ID":"3f5ca541-00d0-4deb-896d-acd546d0a819","Type":"ContainerStarted","Data":"2ba1561fedeb67bce131f34cf77038bf40109087e60ca2eb3dc9fdb9c574c4f6"} Feb 23 10:08:06 crc kubenswrapper[4940]: I0223 10:08:06.259061 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" podStartSLOduration=1.774389356 podStartE2EDuration="13.259042841s" podCreationTimestamp="2026-02-23 10:07:53 +0000 UTC" firstStartedPulling="2026-02-23 10:07:53.536833913 +0000 UTC m=+4804.920040070" lastFinishedPulling="2026-02-23 10:08:05.021487398 +0000 UTC m=+4816.404693555" observedRunningTime="2026-02-23 10:08:06.248708237 +0000 UTC m=+4817.631914404" watchObservedRunningTime="2026-02-23 10:08:06.259042841 +0000 UTC m=+4817.642248998" Feb 23 10:08:07 crc kubenswrapper[4940]: I0223 10:08:07.235434 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdf6v" 
event={"ID":"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a","Type":"ContainerStarted","Data":"b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3"} Feb 23 10:08:07 crc kubenswrapper[4940]: I0223 10:08:07.240880 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxfm" event={"ID":"1285cc0b-bda4-4d3a-9bdc-98650295bd09","Type":"ContainerStarted","Data":"b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7"} Feb 23 10:08:07 crc kubenswrapper[4940]: I0223 10:08:07.262654 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdf6v" podStartSLOduration=2.767353305 podStartE2EDuration="11.262631334s" podCreationTimestamp="2026-02-23 10:07:56 +0000 UTC" firstStartedPulling="2026-02-23 10:07:58.143740611 +0000 UTC m=+4809.526946768" lastFinishedPulling="2026-02-23 10:08:06.63901864 +0000 UTC m=+4818.022224797" observedRunningTime="2026-02-23 10:08:07.256016586 +0000 UTC m=+4818.639222743" watchObservedRunningTime="2026-02-23 10:08:07.262631334 +0000 UTC m=+4818.645837491" Feb 23 10:08:07 crc kubenswrapper[4940]: I0223 10:08:07.274895 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvxfm" podStartSLOduration=3.666797035 podStartE2EDuration="12.274877728s" podCreationTimestamp="2026-02-23 10:07:55 +0000 UTC" firstStartedPulling="2026-02-23 10:07:58.138443625 +0000 UTC m=+4809.521649782" lastFinishedPulling="2026-02-23 10:08:06.746524318 +0000 UTC m=+4818.129730475" observedRunningTime="2026-02-23 10:08:07.273558457 +0000 UTC m=+4818.656764624" watchObservedRunningTime="2026-02-23 10:08:07.274877728 +0000 UTC m=+4818.658083875" Feb 23 10:08:16 crc kubenswrapper[4940]: I0223 10:08:16.435299 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:08:16 crc kubenswrapper[4940]: I0223 10:08:16.436042 
4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:08:16 crc kubenswrapper[4940]: I0223 10:08:16.489020 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:08:16 crc kubenswrapper[4940]: I0223 10:08:16.608000 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:08:16 crc kubenswrapper[4940]: I0223 10:08:16.608299 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:08:16 crc kubenswrapper[4940]: I0223 10:08:16.664462 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:08:17 crc kubenswrapper[4940]: I0223 10:08:17.389224 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:08:17 crc kubenswrapper[4940]: I0223 10:08:17.400329 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:08:17 crc kubenswrapper[4940]: I0223 10:08:17.932731 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdf6v"] Feb 23 10:08:19 crc kubenswrapper[4940]: I0223 10:08:19.358471 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdf6v" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="registry-server" containerID="cri-o://b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3" gracePeriod=2 Feb 23 10:08:19 crc kubenswrapper[4940]: I0223 10:08:19.746567 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxfm"] Feb 23 10:08:19 crc 
kubenswrapper[4940]: I0223 10:08:19.747262 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvxfm" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="registry-server" containerID="cri-o://b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7" gracePeriod=2 Feb 23 10:08:19 crc kubenswrapper[4940]: I0223 10:08:19.961200 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.075091 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-catalog-content\") pod \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.075815 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-utilities\") pod \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.075905 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7klcm\" (UniqueName: \"kubernetes.io/projected/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-kube-api-access-7klcm\") pod \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\" (UID: \"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a\") " Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.078011 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-utilities" (OuterVolumeSpecName: "utilities") pod "ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" (UID: "ecadaddd-3b79-40ab-8938-5d8bc8c8d01a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.093344 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-kube-api-access-7klcm" (OuterVolumeSpecName: "kube-api-access-7klcm") pod "ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" (UID: "ecadaddd-3b79-40ab-8938-5d8bc8c8d01a"). InnerVolumeSpecName "kube-api-access-7klcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.178578 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.178630 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7klcm\" (UniqueName: \"kubernetes.io/projected/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-kube-api-access-7klcm\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.308019 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.319964 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" (UID: "ecadaddd-3b79-40ab-8938-5d8bc8c8d01a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.381796 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-catalog-content\") pod \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.383537 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdkg8\" (UniqueName: \"kubernetes.io/projected/1285cc0b-bda4-4d3a-9bdc-98650295bd09-kube-api-access-jdkg8\") pod \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.383603 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-utilities\") pod \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\" (UID: \"1285cc0b-bda4-4d3a-9bdc-98650295bd09\") " Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.385017 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.386209 4940 generic.go:334] "Generic (PLEG): container finished" podID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerID="b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3" exitCode=0 Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.386295 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdf6v" event={"ID":"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a","Type":"ContainerDied","Data":"b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3"} Feb 23 10:08:20 crc 
kubenswrapper[4940]: I0223 10:08:20.386331 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdf6v" event={"ID":"ecadaddd-3b79-40ab-8938-5d8bc8c8d01a","Type":"ContainerDied","Data":"7b11b3abe6170e724ed29b0561c2c2a0fc069fbd199041ece1356d91173094de"} Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.386353 4940 scope.go:117] "RemoveContainer" containerID="b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.386501 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdf6v" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.400670 4940 generic.go:334] "Generic (PLEG): container finished" podID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerID="b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7" exitCode=0 Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.400711 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxfm" event={"ID":"1285cc0b-bda4-4d3a-9bdc-98650295bd09","Type":"ContainerDied","Data":"b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7"} Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.400738 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvxfm" event={"ID":"1285cc0b-bda4-4d3a-9bdc-98650295bd09","Type":"ContainerDied","Data":"1cd4706ff99ed644e9d75c79be5a4bcf5ace46583a32816e25b879bee8f3c5d9"} Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.400815 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvxfm" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.424240 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-utilities" (OuterVolumeSpecName: "utilities") pod "1285cc0b-bda4-4d3a-9bdc-98650295bd09" (UID: "1285cc0b-bda4-4d3a-9bdc-98650295bd09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.454877 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdf6v"] Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.462194 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1285cc0b-bda4-4d3a-9bdc-98650295bd09-kube-api-access-jdkg8" (OuterVolumeSpecName: "kube-api-access-jdkg8") pod "1285cc0b-bda4-4d3a-9bdc-98650295bd09" (UID: "1285cc0b-bda4-4d3a-9bdc-98650295bd09"). InnerVolumeSpecName "kube-api-access-jdkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.483208 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kdf6v"] Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.508078 4940 scope.go:117] "RemoveContainer" containerID="824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.509325 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdkg8\" (UniqueName: \"kubernetes.io/projected/1285cc0b-bda4-4d3a-9bdc-98650295bd09-kube-api-access-jdkg8\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.509367 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.519764 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1285cc0b-bda4-4d3a-9bdc-98650295bd09" (UID: "1285cc0b-bda4-4d3a-9bdc-98650295bd09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.557285 4940 scope.go:117] "RemoveContainer" containerID="d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.611067 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1285cc0b-bda4-4d3a-9bdc-98650295bd09-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.613517 4940 scope.go:117] "RemoveContainer" containerID="b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3" Feb 23 10:08:20 crc kubenswrapper[4940]: E0223 10:08:20.613982 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3\": container with ID starting with b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3 not found: ID does not exist" containerID="b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.614014 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3"} err="failed to get container status \"b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3\": rpc error: code = NotFound desc = could not find container \"b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3\": container with ID starting with b756307563c1fc57f69aa1e4bbcc528ac83a25246e6469cd334d78a20a2c5fa3 not found: ID does not exist" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.614035 4940 scope.go:117] "RemoveContainer" containerID="824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120" Feb 23 10:08:20 crc kubenswrapper[4940]: E0223 10:08:20.614256 4940 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120\": container with ID starting with 824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120 not found: ID does not exist" containerID="824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.614291 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120"} err="failed to get container status \"824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120\": rpc error: code = NotFound desc = could not find container \"824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120\": container with ID starting with 824c1d17913ffbcd514c9da59a7b52bedcbaeb6aa2a9f46985c5bbb13d93c120 not found: ID does not exist" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.614304 4940 scope.go:117] "RemoveContainer" containerID="d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329" Feb 23 10:08:20 crc kubenswrapper[4940]: E0223 10:08:20.614590 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329\": container with ID starting with d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329 not found: ID does not exist" containerID="d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.614626 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329"} err="failed to get container status \"d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329\": rpc error: code = NotFound desc = could 
not find container \"d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329\": container with ID starting with d9e9a6000aa76a3196451e527e6b97ea8c9557b16bfba2723c5a95eab98c9329 not found: ID does not exist" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.614639 4940 scope.go:117] "RemoveContainer" containerID="b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.639304 4940 scope.go:117] "RemoveContainer" containerID="f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.671058 4940 scope.go:117] "RemoveContainer" containerID="3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.710138 4940 scope.go:117] "RemoveContainer" containerID="b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7" Feb 23 10:08:20 crc kubenswrapper[4940]: E0223 10:08:20.710509 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7\": container with ID starting with b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7 not found: ID does not exist" containerID="b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.710545 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7"} err="failed to get container status \"b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7\": rpc error: code = NotFound desc = could not find container \"b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7\": container with ID starting with b48cf1f85e8cef7e894790f327a748b8657831f661076f6e9767278a1bb4fec7 not found: ID does not exist" Feb 23 
10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.710569 4940 scope.go:117] "RemoveContainer" containerID="f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944" Feb 23 10:08:20 crc kubenswrapper[4940]: E0223 10:08:20.710816 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944\": container with ID starting with f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944 not found: ID does not exist" containerID="f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.710838 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944"} err="failed to get container status \"f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944\": rpc error: code = NotFound desc = could not find container \"f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944\": container with ID starting with f1394a4f84fe10386ca7854cbb3e2ec562815908102a65d074ed911b714eb944 not found: ID does not exist" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.710852 4940 scope.go:117] "RemoveContainer" containerID="3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb" Feb 23 10:08:20 crc kubenswrapper[4940]: E0223 10:08:20.711060 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb\": container with ID starting with 3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb not found: ID does not exist" containerID="3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.711078 4940 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb"} err="failed to get container status \"3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb\": rpc error: code = NotFound desc = could not find container \"3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb\": container with ID starting with 3144fe00aa4046ddb2dcdb352bf8018a7db80869dcb208d324f52806f57c6adb not found: ID does not exist" Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.765545 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxfm"] Feb 23 10:08:20 crc kubenswrapper[4940]: I0223 10:08:20.774786 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvxfm"] Feb 23 10:08:21 crc kubenswrapper[4940]: I0223 10:08:21.359097 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" path="/var/lib/kubelet/pods/1285cc0b-bda4-4d3a-9bdc-98650295bd09/volumes" Feb 23 10:08:21 crc kubenswrapper[4940]: I0223 10:08:21.360344 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" path="/var/lib/kubelet/pods/ecadaddd-3b79-40ab-8938-5d8bc8c8d01a/volumes" Feb 23 10:08:31 crc kubenswrapper[4940]: I0223 10:08:31.428971 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:08:31 crc kubenswrapper[4940]: I0223 10:08:31.429551 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:08:31 crc kubenswrapper[4940]: I0223 10:08:31.429605 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 10:08:31 crc kubenswrapper[4940]: I0223 10:08:31.430592 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73564eb7ca7da05d30ce02d7b7f4f13edd916d63410bf2494932823351662582"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:08:31 crc kubenswrapper[4940]: I0223 10:08:31.430676 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://73564eb7ca7da05d30ce02d7b7f4f13edd916d63410bf2494932823351662582" gracePeriod=600 Feb 23 10:08:32 crc kubenswrapper[4940]: I0223 10:08:32.506775 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="73564eb7ca7da05d30ce02d7b7f4f13edd916d63410bf2494932823351662582" exitCode=0 Feb 23 10:08:32 crc kubenswrapper[4940]: I0223 10:08:32.506855 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"73564eb7ca7da05d30ce02d7b7f4f13edd916d63410bf2494932823351662582"} Feb 23 10:08:32 crc kubenswrapper[4940]: I0223 10:08:32.507223 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f"} Feb 23 10:08:32 crc kubenswrapper[4940]: I0223 10:08:32.507246 4940 scope.go:117] "RemoveContainer" containerID="60c1bdf7587e6a2309b5b7e3f1e911b58db5d2196b9ec6a170337311283de3da" Feb 23 10:08:53 crc kubenswrapper[4940]: I0223 10:08:53.702397 4940 generic.go:334] "Generic (PLEG): container finished" podID="3f5ca541-00d0-4deb-896d-acd546d0a819" containerID="2ba1561fedeb67bce131f34cf77038bf40109087e60ca2eb3dc9fdb9c574c4f6" exitCode=0 Feb 23 10:08:53 crc kubenswrapper[4940]: I0223 10:08:53.702561 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" event={"ID":"3f5ca541-00d0-4deb-896d-acd546d0a819","Type":"ContainerDied","Data":"2ba1561fedeb67bce131f34cf77038bf40109087e60ca2eb3dc9fdb9c574c4f6"} Feb 23 10:08:54 crc kubenswrapper[4940]: I0223 10:08:54.938047 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:08:54 crc kubenswrapper[4940]: I0223 10:08:54.989144 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27xfq/crc-debug-dvpkt"] Feb 23 10:08:54 crc kubenswrapper[4940]: I0223 10:08:54.993939 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27xfq/crc-debug-dvpkt"] Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.022396 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f5ca541-00d0-4deb-896d-acd546d0a819-host\") pod \"3f5ca541-00d0-4deb-896d-acd546d0a819\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.022509 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f5ca541-00d0-4deb-896d-acd546d0a819-host" (OuterVolumeSpecName: "host") pod "3f5ca541-00d0-4deb-896d-acd546d0a819" (UID: "3f5ca541-00d0-4deb-896d-acd546d0a819"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.022584 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndsxf\" (UniqueName: \"kubernetes.io/projected/3f5ca541-00d0-4deb-896d-acd546d0a819-kube-api-access-ndsxf\") pod \"3f5ca541-00d0-4deb-896d-acd546d0a819\" (UID: \"3f5ca541-00d0-4deb-896d-acd546d0a819\") " Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.023342 4940 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f5ca541-00d0-4deb-896d-acd546d0a819-host\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.029979 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5ca541-00d0-4deb-896d-acd546d0a819-kube-api-access-ndsxf" (OuterVolumeSpecName: "kube-api-access-ndsxf") pod "3f5ca541-00d0-4deb-896d-acd546d0a819" (UID: "3f5ca541-00d0-4deb-896d-acd546d0a819"). InnerVolumeSpecName "kube-api-access-ndsxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.125353 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndsxf\" (UniqueName: \"kubernetes.io/projected/3f5ca541-00d0-4deb-896d-acd546d0a819-kube-api-access-ndsxf\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.356582 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5ca541-00d0-4deb-896d-acd546d0a819" path="/var/lib/kubelet/pods/3f5ca541-00d0-4deb-896d-acd546d0a819/volumes" Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.733957 4940 scope.go:117] "RemoveContainer" containerID="2ba1561fedeb67bce131f34cf77038bf40109087e60ca2eb3dc9fdb9c574c4f6" Feb 23 10:08:55 crc kubenswrapper[4940]: I0223 10:08:55.734277 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-dvpkt" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212004 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27xfq/crc-debug-5l569"] Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212721 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="registry-server" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212749 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="registry-server" Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212774 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="extract-content" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212785 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="extract-content" Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212810 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="extract-utilities" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212823 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="extract-utilities" Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212869 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="extract-utilities" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212879 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="extract-utilities" Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212897 4940 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f5ca541-00d0-4deb-896d-acd546d0a819" containerName="container-00" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212907 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5ca541-00d0-4deb-896d-acd546d0a819" containerName="container-00" Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212939 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="extract-content" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212950 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="extract-content" Feb 23 10:08:56 crc kubenswrapper[4940]: E0223 10:08:56.212970 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="registry-server" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.212980 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="registry-server" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.213298 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecadaddd-3b79-40ab-8938-5d8bc8c8d01a" containerName="registry-server" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.213339 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5ca541-00d0-4deb-896d-acd546d0a819" containerName="container-00" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.213365 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="1285cc0b-bda4-4d3a-9bdc-98650295bd09" containerName="registry-server" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.214274 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.350605 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spx9\" (UniqueName: \"kubernetes.io/projected/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-kube-api-access-6spx9\") pod \"crc-debug-5l569\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.350767 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-host\") pod \"crc-debug-5l569\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.452329 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6spx9\" (UniqueName: \"kubernetes.io/projected/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-kube-api-access-6spx9\") pod \"crc-debug-5l569\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.452432 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-host\") pod \"crc-debug-5l569\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.452784 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-host\") pod \"crc-debug-5l569\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc 
kubenswrapper[4940]: I0223 10:08:56.627657 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6spx9\" (UniqueName: \"kubernetes.io/projected/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-kube-api-access-6spx9\") pod \"crc-debug-5l569\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:56 crc kubenswrapper[4940]: I0223 10:08:56.831262 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:57 crc kubenswrapper[4940]: I0223 10:08:57.773695 4940 generic.go:334] "Generic (PLEG): container finished" podID="99a38c0c-dbad-4dfa-a94d-f2c4062322d9" containerID="df627c3f5c9f3ae2ff8743b6f1edefcd3e58d47c4737424ba0aa22c771853e51" exitCode=0 Feb 23 10:08:57 crc kubenswrapper[4940]: I0223 10:08:57.773785 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-5l569" event={"ID":"99a38c0c-dbad-4dfa-a94d-f2c4062322d9","Type":"ContainerDied","Data":"df627c3f5c9f3ae2ff8743b6f1edefcd3e58d47c4737424ba0aa22c771853e51"} Feb 23 10:08:57 crc kubenswrapper[4940]: I0223 10:08:57.774048 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-5l569" event={"ID":"99a38c0c-dbad-4dfa-a94d-f2c4062322d9","Type":"ContainerStarted","Data":"3a9b204db67c0fe0f3df040b8acfe79ae244ea19ef3484ac6a60bce4547245dd"} Feb 23 10:08:58 crc kubenswrapper[4940]: I0223 10:08:58.924028 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.002439 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-host\") pod \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.002592 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-host" (OuterVolumeSpecName: "host") pod "99a38c0c-dbad-4dfa-a94d-f2c4062322d9" (UID: "99a38c0c-dbad-4dfa-a94d-f2c4062322d9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.002721 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6spx9\" (UniqueName: \"kubernetes.io/projected/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-kube-api-access-6spx9\") pod \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\" (UID: \"99a38c0c-dbad-4dfa-a94d-f2c4062322d9\") " Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.003693 4940 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-host\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.022757 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-kube-api-access-6spx9" (OuterVolumeSpecName: "kube-api-access-6spx9") pod "99a38c0c-dbad-4dfa-a94d-f2c4062322d9" (UID: "99a38c0c-dbad-4dfa-a94d-f2c4062322d9"). InnerVolumeSpecName "kube-api-access-6spx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.105315 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6spx9\" (UniqueName: \"kubernetes.io/projected/99a38c0c-dbad-4dfa-a94d-f2c4062322d9-kube-api-access-6spx9\") on node \"crc\" DevicePath \"\"" Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.795485 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-5l569" event={"ID":"99a38c0c-dbad-4dfa-a94d-f2c4062322d9","Type":"ContainerDied","Data":"3a9b204db67c0fe0f3df040b8acfe79ae244ea19ef3484ac6a60bce4547245dd"} Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.795545 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9b204db67c0fe0f3df040b8acfe79ae244ea19ef3484ac6a60bce4547245dd" Feb 23 10:08:59 crc kubenswrapper[4940]: I0223 10:08:59.795638 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-5l569" Feb 23 10:09:00 crc kubenswrapper[4940]: I0223 10:09:00.533104 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27xfq/crc-debug-5l569"] Feb 23 10:09:00 crc kubenswrapper[4940]: I0223 10:09:00.546174 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27xfq/crc-debug-5l569"] Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.355673 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a38c0c-dbad-4dfa-a94d-f2c4062322d9" path="/var/lib/kubelet/pods/99a38c0c-dbad-4dfa-a94d-f2c4062322d9/volumes" Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.838957 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-27xfq/crc-debug-4rfh9"] Feb 23 10:09:01 crc kubenswrapper[4940]: E0223 10:09:01.839496 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a38c0c-dbad-4dfa-a94d-f2c4062322d9" 
containerName="container-00" Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.839513 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a38c0c-dbad-4dfa-a94d-f2c4062322d9" containerName="container-00" Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.839957 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a38c0c-dbad-4dfa-a94d-f2c4062322d9" containerName="container-00" Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.841460 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.960318 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/828c343a-4608-48d8-a790-195df40097d3-host\") pod \"crc-debug-4rfh9\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:01 crc kubenswrapper[4940]: I0223 10:09:01.960571 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv567\" (UniqueName: \"kubernetes.io/projected/828c343a-4608-48d8-a790-195df40097d3-kube-api-access-qv567\") pod \"crc-debug-4rfh9\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.062677 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv567\" (UniqueName: \"kubernetes.io/projected/828c343a-4608-48d8-a790-195df40097d3-kube-api-access-qv567\") pod \"crc-debug-4rfh9\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.062804 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/828c343a-4608-48d8-a790-195df40097d3-host\") pod \"crc-debug-4rfh9\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.062914 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/828c343a-4608-48d8-a790-195df40097d3-host\") pod \"crc-debug-4rfh9\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.080869 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv567\" (UniqueName: \"kubernetes.io/projected/828c343a-4608-48d8-a790-195df40097d3-kube-api-access-qv567\") pod \"crc-debug-4rfh9\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.164506 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.827234 4940 generic.go:334] "Generic (PLEG): container finished" podID="828c343a-4608-48d8-a790-195df40097d3" containerID="b1e8aec565f2a224177633f47452d4bc89d6ade230d02d490a32555f525c6bb5" exitCode=0 Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.827309 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-4rfh9" event={"ID":"828c343a-4608-48d8-a790-195df40097d3","Type":"ContainerDied","Data":"b1e8aec565f2a224177633f47452d4bc89d6ade230d02d490a32555f525c6bb5"} Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.827732 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/crc-debug-4rfh9" event={"ID":"828c343a-4608-48d8-a790-195df40097d3","Type":"ContainerStarted","Data":"182aead29228e85ab51f975100a856ccfe9a39b1da0625f14e4af30464e27ba7"} Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.870171 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27xfq/crc-debug-4rfh9"] Feb 23 10:09:02 crc kubenswrapper[4940]: I0223 10:09:02.878804 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27xfq/crc-debug-4rfh9"] Feb 23 10:09:03 crc kubenswrapper[4940]: I0223 10:09:03.940012 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.105430 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/828c343a-4608-48d8-a790-195df40097d3-host\") pod \"828c343a-4608-48d8-a790-195df40097d3\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.105559 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/828c343a-4608-48d8-a790-195df40097d3-host" (OuterVolumeSpecName: "host") pod "828c343a-4608-48d8-a790-195df40097d3" (UID: "828c343a-4608-48d8-a790-195df40097d3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.105631 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv567\" (UniqueName: \"kubernetes.io/projected/828c343a-4608-48d8-a790-195df40097d3-kube-api-access-qv567\") pod \"828c343a-4608-48d8-a790-195df40097d3\" (UID: \"828c343a-4608-48d8-a790-195df40097d3\") " Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.106313 4940 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/828c343a-4608-48d8-a790-195df40097d3-host\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.117872 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828c343a-4608-48d8-a790-195df40097d3-kube-api-access-qv567" (OuterVolumeSpecName: "kube-api-access-qv567") pod "828c343a-4608-48d8-a790-195df40097d3" (UID: "828c343a-4608-48d8-a790-195df40097d3"). InnerVolumeSpecName "kube-api-access-qv567". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.208464 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv567\" (UniqueName: \"kubernetes.io/projected/828c343a-4608-48d8-a790-195df40097d3-kube-api-access-qv567\") on node \"crc\" DevicePath \"\"" Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.850065 4940 scope.go:117] "RemoveContainer" containerID="b1e8aec565f2a224177633f47452d4bc89d6ade230d02d490a32555f525c6bb5" Feb 23 10:09:04 crc kubenswrapper[4940]: I0223 10:09:04.850147 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/crc-debug-4rfh9" Feb 23 10:09:05 crc kubenswrapper[4940]: I0223 10:09:05.357974 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828c343a-4608-48d8-a790-195df40097d3" path="/var/lib/kubelet/pods/828c343a-4608-48d8-a790-195df40097d3/volumes" Feb 23 10:09:20 crc kubenswrapper[4940]: I0223 10:09:20.590456 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8f67f879d-fb7mr_e15eadde-81b6-46a2-bc90-7f8ded67b3bd/barbican-api/0.log" Feb 23 10:09:20 crc kubenswrapper[4940]: I0223 10:09:20.794671 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d6cbfd9cd-f6hzm_8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce/barbican-keystone-listener/0.log" Feb 23 10:09:20 crc kubenswrapper[4940]: I0223 10:09:20.814470 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8f67f879d-fb7mr_e15eadde-81b6-46a2-bc90-7f8ded67b3bd/barbican-api-log/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.065052 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7b9b88c6bc-hkv9v_8008f8dc-0709-408f-88d1-0707f66c0a10/barbican-worker/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.082659 4940 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-worker-7b9b88c6bc-hkv9v_8008f8dc-0709-408f-88d1-0707f66c0a10/barbican-worker-log/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.330902 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh_5d90dbb8-e870-41e1-bbab-a053b479fee1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.611380 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/ceilometer-notification-agent/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.615406 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/proxy-httpd/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.645664 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d6cbfd9cd-f6hzm_8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce/barbican-keystone-listener-log/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.667845 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/ceilometer-central-agent/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.780023 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/sg-core/0.log" Feb 23 10:09:21 crc kubenswrapper[4940]: I0223 10:09:21.925823 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_63ebc8a2-744a-4844-b60d-80fefedbf7df/ceph/0.log" Feb 23 10:09:22 crc kubenswrapper[4940]: I0223 10:09:22.281778 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f91c0e0d-08da-47b9-acef-5e4e9856fc85/cinder-api-log/0.log" Feb 23 10:09:22 crc kubenswrapper[4940]: I0223 10:09:22.317203 4940 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f91c0e0d-08da-47b9-acef-5e4e9856fc85/cinder-api/0.log" Feb 23 10:09:22 crc kubenswrapper[4940]: I0223 10:09:22.395456 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_20de4506-b14e-4f9b-9afc-c4d9ac6aef52/probe/0.log" Feb 23 10:09:22 crc kubenswrapper[4940]: I0223 10:09:22.619830 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8ea914bd-a046-42ba-942e-7d3d778d0b52/cinder-scheduler/0.log" Feb 23 10:09:22 crc kubenswrapper[4940]: I0223 10:09:22.745101 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8ea914bd-a046-42ba-942e-7d3d778d0b52/probe/0.log" Feb 23 10:09:23 crc kubenswrapper[4940]: I0223 10:09:23.071120 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f446400f-c44a-49c0-891b-83b475c43e39/probe/0.log" Feb 23 10:09:23 crc kubenswrapper[4940]: I0223 10:09:23.254931 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-z77p7_10b1d407-edfe-4a01-9d25-ae2d0491e2aa/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:23 crc kubenswrapper[4940]: I0223 10:09:23.504501 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8_bf532816-d5b9-4205-844c-bf70b4cc5c18/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:23 crc kubenswrapper[4940]: I0223 10:09:23.792532 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-v8kxm_99e36de7-b768-429a-a0c5-78ee546952bf/init/0.log" Feb 23 10:09:23 crc kubenswrapper[4940]: I0223 10:09:23.967827 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-v8kxm_99e36de7-b768-429a-a0c5-78ee546952bf/init/0.log" Feb 23 10:09:24 crc 
kubenswrapper[4940]: I0223 10:09:24.193557 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-v8kxm_99e36de7-b768-429a-a0c5-78ee546952bf/dnsmasq-dns/0.log" Feb 23 10:09:24 crc kubenswrapper[4940]: I0223 10:09:24.454511 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb_50cd61db-fb52-4abe-a3c6-7c3e3777d04b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:24 crc kubenswrapper[4940]: I0223 10:09:24.594455 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_20de4506-b14e-4f9b-9afc-c4d9ac6aef52/cinder-backup/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.072424 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4c37345c-c81e-4d3f-8b55-8eec1705a5a1/glance-httpd/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.086905 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4c37345c-c81e-4d3f-8b55-8eec1705a5a1/glance-log/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.096498 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a6886923-a3fa-46f7-97f5-7864c61a5137/glance-httpd/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.342014 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a6886923-a3fa-46f7-97f5-7864c61a5137/glance-log/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.481574 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8485464bb-cvmj5_0c698dee-e3c4-44d3-a08b-73e6b1e87986/horizon/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.608483 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l9x62_b42a2d02-c866-40d6-93ce-81d71aaf7195/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:25 crc kubenswrapper[4940]: I0223 10:09:25.789656 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nntpm_05541a9b-b462-4150-b0d7-131d75a1d775/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:26 crc kubenswrapper[4940]: I0223 10:09:26.074503 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8485464bb-cvmj5_0c698dee-e3c4-44d3-a08b-73e6b1e87986/horizon-log/0.log" Feb 23 10:09:26 crc kubenswrapper[4940]: I0223 10:09:26.103836 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29530681-kgpdn_62c1078b-acb9-4ce6-9c47-290a2ec6e9b0/keystone-cron/0.log" Feb 23 10:09:26 crc kubenswrapper[4940]: I0223 10:09:26.217341 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f446400f-c44a-49c0-891b-83b475c43e39/cinder-volume/0.log" Feb 23 10:09:26 crc kubenswrapper[4940]: I0223 10:09:26.786593 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_15d7a09a-83f9-4b41-a280-e0d7257ee6f3/kube-state-metrics/0.log" Feb 23 10:09:26 crc kubenswrapper[4940]: I0223 10:09:26.958665 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl_7376823f-eb39-4631-9cac-0d4b297a9580/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:27 crc kubenswrapper[4940]: I0223 10:09:27.594804 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6efb7037-6af6-4b85-b2fc-940a912cddf4/probe/0.log" Feb 23 10:09:27 crc kubenswrapper[4940]: I0223 10:09:27.647422 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_6efb7037-6af6-4b85-b2fc-940a912cddf4/manila-scheduler/0.log" Feb 23 10:09:27 crc kubenswrapper[4940]: I0223 10:09:27.708002 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3f78e173-a538-4fa3-804d-25bff89a23ca/manila-api/0.log" Feb 23 10:09:28 crc kubenswrapper[4940]: I0223 10:09:28.138266 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1cfd9d39-e351-44f6-90b2-02c15fef4e9f/probe/0.log" Feb 23 10:09:28 crc kubenswrapper[4940]: I0223 10:09:28.405896 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3f78e173-a538-4fa3-804d-25bff89a23ca/manila-api-log/0.log" Feb 23 10:09:28 crc kubenswrapper[4940]: I0223 10:09:28.415493 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1cfd9d39-e351-44f6-90b2-02c15fef4e9f/manila-share/0.log" Feb 23 10:09:29 crc kubenswrapper[4940]: I0223 10:09:29.095769 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4_dea28292-7367-4777-9e99-80da3a9c51cf/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:29 crc kubenswrapper[4940]: I0223 10:09:29.396494 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-654489f6f-92jdq_feae1958-0b14-4a24-af08-cb96a4131a47/neutron-httpd/0.log" Feb 23 10:09:29 crc kubenswrapper[4940]: I0223 10:09:29.686847 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-657b46f66d-5snf5_14b5e353-0333-4351-a628-4767407854ec/keystone-api/0.log" Feb 23 10:09:30 crc kubenswrapper[4940]: I0223 10:09:30.062823 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-654489f6f-92jdq_feae1958-0b14-4a24-af08-cb96a4131a47/neutron-api/0.log" Feb 23 10:09:30 crc kubenswrapper[4940]: I0223 10:09:30.662911 4940 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c854cfd6-7319-4a6c-8893-a96cd32bdcd0/nova-cell1-conductor-conductor/0.log" Feb 23 10:09:30 crc kubenswrapper[4940]: I0223 10:09:30.669056 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_09737df3-14f0-4f68-a683-5402bfcb0aab/nova-cell0-conductor-conductor/0.log" Feb 23 10:09:31 crc kubenswrapper[4940]: I0223 10:09:31.014160 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1baa0ab5-14b9-4150-872e-e135857e3033/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 10:09:31 crc kubenswrapper[4940]: I0223 10:09:31.336224 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wj7m8_4528f4f4-45cd-415f-902e-d15ecef72b60/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:31 crc kubenswrapper[4940]: I0223 10:09:31.657447 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_38eb6728-c410-4f85-ac35-969880b14e26/nova-metadata-log/0.log" Feb 23 10:09:31 crc kubenswrapper[4940]: I0223 10:09:31.967177 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_10792692-8f84-43da-aea3-46d28e5ba1f5/nova-api-log/0.log" Feb 23 10:09:32 crc kubenswrapper[4940]: I0223 10:09:32.439162 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f228632e-c649-4cbf-9a32-5baad303ef28/mysql-bootstrap/0.log" Feb 23 10:09:32 crc kubenswrapper[4940]: I0223 10:09:32.589017 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_261aaecb-ec48-4d96-9579-35057b0d6394/nova-scheduler-scheduler/0.log" Feb 23 10:09:32 crc kubenswrapper[4940]: I0223 10:09:32.665971 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f228632e-c649-4cbf-9a32-5baad303ef28/mysql-bootstrap/0.log" Feb 23 10:09:32 crc 
kubenswrapper[4940]: I0223 10:09:32.819346 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f228632e-c649-4cbf-9a32-5baad303ef28/galera/0.log" Feb 23 10:09:32 crc kubenswrapper[4940]: I0223 10:09:32.828288 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_10792692-8f84-43da-aea3-46d28e5ba1f5/nova-api-api/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.058483 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b7438a4-1302-46b5-a005-b74758200871/mysql-bootstrap/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.259076 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b7438a4-1302-46b5-a005-b74758200871/mysql-bootstrap/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.334785 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b7438a4-1302-46b5-a005-b74758200871/galera/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.562909 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1a7ead03-cd14-44b3-967b-9daaf4070687/openstackclient/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.590737 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jl7wx_53a2f9a0-c632-432a-aebd-7f3c5863d0bc/openstack-network-exporter/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.839831 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovsdb-server-init/0.log" Feb 23 10:09:33 crc kubenswrapper[4940]: I0223 10:09:33.880699 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_38eb6728-c410-4f85-ac35-969880b14e26/nova-metadata-metadata/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.432730 4940 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovsdb-server/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.444389 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovsdb-server-init/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.464211 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovs-vswitchd/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.617528 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-skhdb_f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa/ovn-controller/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.732418 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ltqdk_d252356a-80f4-4cf3-b739-520d9bd4b2c1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.800450 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a/openstack-network-exporter/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.928675 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a/ovn-northd/0.log" Feb 23 10:09:34 crc kubenswrapper[4940]: I0223 10:09:34.992442 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_20aa8441-57d4-4190-8edb-609af4891496/openstack-network-exporter/0.log" Feb 23 10:09:35 crc kubenswrapper[4940]: I0223 10:09:35.043046 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_20aa8441-57d4-4190-8edb-609af4891496/ovsdbserver-nb/0.log" Feb 23 10:09:35 crc kubenswrapper[4940]: I0223 
10:09:35.281047 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0e5b3c11-0f21-4277-b49b-15dc23cc9d96/ovsdbserver-sb/0.log" Feb 23 10:09:35 crc kubenswrapper[4940]: I0223 10:09:35.292469 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0e5b3c11-0f21-4277-b49b-15dc23cc9d96/openstack-network-exporter/0.log" Feb 23 10:09:35 crc kubenswrapper[4940]: I0223 10:09:35.644290 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_89409108-7455-4318-83ba-65a6dd96d76c/setup-container/0.log" Feb 23 10:09:35 crc kubenswrapper[4940]: I0223 10:09:35.792625 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-96958f474-956sq_e38493cb-6fde-4245-a5a4-99a91920708b/placement-api/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.033805 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-96958f474-956sq_e38493cb-6fde-4245-a5a4-99a91920708b/placement-log/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.419636 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_89409108-7455-4318-83ba-65a6dd96d76c/setup-container/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.468024 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_89409108-7455-4318-83ba-65a6dd96d76c/rabbitmq/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.508583 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_438743de-ddf8-4b10-878a-b87c389cd3b6/setup-container/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.783151 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_438743de-ddf8-4b10-878a-b87c389cd3b6/setup-container/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.810934 4940 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_438743de-ddf8-4b10-878a-b87c389cd3b6/rabbitmq/0.log" Feb 23 10:09:36 crc kubenswrapper[4940]: I0223 10:09:36.837318 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8_863d4b0a-6bc6-44a6-89d0-9167411a397d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.044863 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mh545_5fe83bad-242b-4933-9ff1-525359d29867/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.102742 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l_cb29483b-9f50-4202-935a-0ff2e3e7d3ec/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.268707 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lgnzc_a25d2721-a065-4c7a-9d4c-61c3be28422e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.419483 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-w7thd_96cf1ebf-387e-417f-83eb-a360f951217e/ssh-known-hosts-edpm-deployment/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.631795 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c475756fc-pxxbv_418704a3-dc2d-440f-8beb-2c00795cf4d4/proxy-server/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.834231 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c475756fc-pxxbv_418704a3-dc2d-440f-8beb-2c00795cf4d4/proxy-httpd/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.837127 4940 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4jjdt_1b9efcfe-df2d-405e-9f10-d22dbce174e9/swift-ring-rebalance/0.log" Feb 23 10:09:37 crc kubenswrapper[4940]: I0223 10:09:37.936847 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-auditor/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.068123 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-reaper/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.106472 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-replicator/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.218251 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-auditor/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.221576 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-server/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.317815 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-server/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.365562 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-replicator/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.469492 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-auditor/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.490677 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-expirer/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.535556 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-updater/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.621948 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-replicator/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.717508 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-server/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.743767 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-updater/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.756482 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/rsync/0.log" Feb 23 10:09:38 crc kubenswrapper[4940]: I0223 10:09:38.829344 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/swift-recon-cron/0.log" Feb 23 10:09:39 crc kubenswrapper[4940]: I0223 10:09:39.026326 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5zv49_16b77a40-fb67-4fe3-b4c8-d87dd4be9b25/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:39 crc kubenswrapper[4940]: I0223 10:09:39.155411 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c7cd2a10-7128-40ff-98b8-6d3026b08566/tempest-tests-tempest-tests-runner/0.log" Feb 23 10:09:39 crc kubenswrapper[4940]: I0223 10:09:39.247865 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_614008c1-1725-42e6-b6b3-407d9b909846/test-operator-logs-container/0.log" Feb 23 10:09:39 crc kubenswrapper[4940]: I0223 10:09:39.398859 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b_f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:09:56 crc kubenswrapper[4940]: I0223 10:09:56.331643 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e0aedede-6061-46c9-8fd2-88a2e1880c2f/memcached/0.log" Feb 23 10:10:10 crc kubenswrapper[4940]: I0223 10:10:10.053938 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-92fk4_0d68e7dc-1d8e-4edd-a2f9-585043e15a98/manager/0.log" Feb 23 10:10:10 crc kubenswrapper[4940]: I0223 10:10:10.333998 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/util/0.log" Feb 23 10:10:10 crc kubenswrapper[4940]: I0223 10:10:10.570918 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/util/0.log" Feb 23 10:10:10 crc kubenswrapper[4940]: I0223 10:10:10.573471 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/pull/0.log" Feb 23 10:10:10 crc kubenswrapper[4940]: I0223 10:10:10.769657 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/pull/0.log" Feb 23 10:10:10 crc kubenswrapper[4940]: I0223 
10:10:10.994662 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/util/0.log" Feb 23 10:10:11 crc kubenswrapper[4940]: I0223 10:10:11.070428 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/pull/0.log" Feb 23 10:10:11 crc kubenswrapper[4940]: I0223 10:10:11.324024 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/extract/0.log" Feb 23 10:10:11 crc kubenswrapper[4940]: I0223 10:10:11.629331 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-p857l_2a7c5730-7ed4-44b1-832d-109fa4460dc5/manager/0.log" Feb 23 10:10:11 crc kubenswrapper[4940]: I0223 10:10:11.764568 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-pvb4b_61343538-79c0-4565-ae70-a397b5fd6b2f/manager/0.log" Feb 23 10:10:12 crc kubenswrapper[4940]: I0223 10:10:12.002145 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-qzd5f_db71f743-426e-4fe8-ab74-17c3f68798fc/manager/0.log" Feb 23 10:10:12 crc kubenswrapper[4940]: I0223 10:10:12.436704 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-bqhhr_c8d94d12-5d54-4c60-85d4-de19e4dfde67/manager/0.log" Feb 23 10:10:12 crc kubenswrapper[4940]: I0223 10:10:12.529363 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-8wv98_34061626-0f45-4bb5-a16f-9059fa45be7f/manager/0.log" Feb 23 
10:10:12 crc kubenswrapper[4940]: I0223 10:10:12.634377 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-86vf7_82d3766e-53e7-4dc8-9c9b-d71e9d930595/manager/0.log" Feb 23 10:10:13 crc kubenswrapper[4940]: I0223 10:10:13.516491 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-vh4r6_780fe903-e160-47c9-9291-31ee2d139266/manager/0.log" Feb 23 10:10:13 crc kubenswrapper[4940]: I0223 10:10:13.651823 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-zqz6k_2fb7ee71-a9af-4504-8899-932449157080/manager/0.log" Feb 23 10:10:13 crc kubenswrapper[4940]: I0223 10:10:13.874498 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rwvf9_e05a318b-495f-49c1-83cf-056d5ce99c8c/manager/0.log" Feb 23 10:10:14 crc kubenswrapper[4940]: I0223 10:10:14.194901 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-6nlcd_15249b0f-c437-4d93-b97a-c7e078139e07/manager/0.log" Feb 23 10:10:14 crc kubenswrapper[4940]: I0223 10:10:14.247074 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-6ztzk_d2c13199-d708-496b-b69a-43fba1068955/manager/0.log" Feb 23 10:10:14 crc kubenswrapper[4940]: I0223 10:10:14.403859 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn_70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e/manager/0.log" Feb 23 10:10:15 crc kubenswrapper[4940]: I0223 10:10:15.504473 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68c97fd8b-ls267_bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00/operator/0.log" Feb 23 10:10:15 crc kubenswrapper[4940]: I0223 10:10:15.697340 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fkkwt_808d7f68-dc41-4211-b785-00e0157483b1/registry-server/0.log" Feb 23 10:10:15 crc kubenswrapper[4940]: I0223 10:10:15.990981 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-58p99_e810e429-c05d-4451-a863-196e8e071d9b/manager/0.log" Feb 23 10:10:16 crc kubenswrapper[4940]: I0223 10:10:16.196675 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-zb9xm_bda50d0f-3559-47b6-9ee2-8104750b30c4/manager/0.log" Feb 23 10:10:16 crc kubenswrapper[4940]: I0223 10:10:16.420583 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gk729_69a079c2-ac60-4b97-ae60-25c8189e6816/operator/0.log" Feb 23 10:10:16 crc kubenswrapper[4940]: I0223 10:10:16.657555 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-qzv55_8d39a603-93c8-4c09-a1d2-97e6c14902fe/manager/0.log" Feb 23 10:10:16 crc kubenswrapper[4940]: I0223 10:10:16.985541 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-khtmd_d2fb7a6a-317d-4180-bcc3-07087b8a48ba/manager/0.log" Feb 23 10:10:17 crc kubenswrapper[4940]: I0223 10:10:17.074152 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-phggr_c81581e5-15a7-4b56-9b22-ecfd026749bc/manager/0.log" Feb 23 10:10:17 crc kubenswrapper[4940]: I0223 10:10:17.368783 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-s2vxb_c6e874c6-520a-40fa-b182-e7a0daab54c7/manager/0.log" Feb 23 10:10:17 crc kubenswrapper[4940]: I0223 10:10:17.375296 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-554b4c57dc-7gq48_ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8/manager/0.log" Feb 23 10:10:17 crc kubenswrapper[4940]: I0223 10:10:17.382726 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-cmbf8_32fc4d76-59e1-44b3-ace9-e9f14dc4f86a/manager/0.log" Feb 23 10:10:21 crc kubenswrapper[4940]: I0223 10:10:21.732996 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-vp5zb_dfc9a681-c309-4803-9be0-6150d615b023/manager/0.log" Feb 23 10:10:31 crc kubenswrapper[4940]: I0223 10:10:31.429970 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:10:31 crc kubenswrapper[4940]: I0223 10:10:31.430426 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:10:38 crc kubenswrapper[4940]: I0223 10:10:38.931937 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-687p7_f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb/control-plane-machine-set-operator/0.log" Feb 23 10:10:39 crc kubenswrapper[4940]: I0223 10:10:39.045382 4940 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-95wjd_4de72bcc-6d41-47cc-b9f7-f4cca10b977f/kube-rbac-proxy/0.log" Feb 23 10:10:39 crc kubenswrapper[4940]: I0223 10:10:39.094032 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-95wjd_4de72bcc-6d41-47cc-b9f7-f4cca10b977f/machine-api-operator/0.log" Feb 23 10:10:52 crc kubenswrapper[4940]: I0223 10:10:52.942417 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-csxvp_ea6d2e05-15f3-4d73-b9e7-d22652f685ff/cert-manager-controller/0.log" Feb 23 10:10:53 crc kubenswrapper[4940]: I0223 10:10:53.146864 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-ls9d2_ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3/cert-manager-cainjector/0.log" Feb 23 10:10:53 crc kubenswrapper[4940]: I0223 10:10:53.177549 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wv7r6_33328c1e-cfb4-435b-a5a0-8b1ec675055a/cert-manager-webhook/0.log" Feb 23 10:11:01 crc kubenswrapper[4940]: I0223 10:11:01.429332 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:11:01 crc kubenswrapper[4940]: I0223 10:11:01.429819 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:11:05 crc kubenswrapper[4940]: I0223 10:11:05.412125 4940 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-w29cj_e77fac6b-039a-43b2-ad12-f5e506201ef7/nmstate-console-plugin/0.log" Feb 23 10:11:05 crc kubenswrapper[4940]: I0223 10:11:05.577679 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-frr6p_a28be9f7-f2d0-4349-8432-a33d0f04d076/nmstate-handler/0.log" Feb 23 10:11:05 crc kubenswrapper[4940]: I0223 10:11:05.676848 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-kcszk_06a06080-4162-423f-bd67-2cdc3aa6cec0/kube-rbac-proxy/0.log" Feb 23 10:11:05 crc kubenswrapper[4940]: I0223 10:11:05.775224 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-kcszk_06a06080-4162-423f-bd67-2cdc3aa6cec0/nmstate-metrics/0.log" Feb 23 10:11:05 crc kubenswrapper[4940]: I0223 10:11:05.853107 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2lctm_6a03ba2d-040d-4fe6-ac2f-081bb22e1f38/nmstate-operator/0.log" Feb 23 10:11:06 crc kubenswrapper[4940]: I0223 10:11:06.034931 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-btmp6_f78f27f8-0a49-4aef-9e58-0cdb19fddbe9/nmstate-webhook/0.log" Feb 23 10:11:31 crc kubenswrapper[4940]: I0223 10:11:31.457128 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:11:31 crc kubenswrapper[4940]: I0223 10:11:31.457539 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:11:31 crc kubenswrapper[4940]: I0223 10:11:31.469769 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 10:11:31 crc kubenswrapper[4940]: I0223 10:11:31.470528 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:11:31 crc kubenswrapper[4940]: I0223 10:11:31.470601 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" gracePeriod=600 Feb 23 10:11:31 crc kubenswrapper[4940]: E0223 10:11:31.592538 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:11:32 crc kubenswrapper[4940]: I0223 10:11:32.248448 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" exitCode=0 Feb 23 10:11:32 crc kubenswrapper[4940]: I0223 10:11:32.248533 4940 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f"} Feb 23 10:11:32 crc kubenswrapper[4940]: I0223 10:11:32.249201 4940 scope.go:117] "RemoveContainer" containerID="73564eb7ca7da05d30ce02d7b7f4f13edd916d63410bf2494932823351662582" Feb 23 10:11:32 crc kubenswrapper[4940]: I0223 10:11:32.249964 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:11:32 crc kubenswrapper[4940]: E0223 10:11:32.250270 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:11:35 crc kubenswrapper[4940]: I0223 10:11:35.692549 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5cz68_e1d5ae18-3a8e-4845-a163-827184c53429/kube-rbac-proxy/0.log" Feb 23 10:11:35 crc kubenswrapper[4940]: I0223 10:11:35.872474 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5cz68_e1d5ae18-3a8e-4845-a163-827184c53429/controller/0.log" Feb 23 10:11:35 crc kubenswrapper[4940]: I0223 10:11:35.962336 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.148172 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: 
I0223 10:11:36.153464 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.174560 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.182760 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.630253 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.647486 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.678246 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.704704 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.893111 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.922631 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/controller/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.928619 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:11:36 crc kubenswrapper[4940]: I0223 10:11:36.944928 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:11:37 crc kubenswrapper[4940]: I0223 10:11:37.102119 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/frr-metrics/0.log" Feb 23 10:11:37 crc kubenswrapper[4940]: I0223 10:11:37.139047 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/kube-rbac-proxy/0.log" Feb 23 10:11:37 crc kubenswrapper[4940]: I0223 10:11:37.209196 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/kube-rbac-proxy-frr/0.log" Feb 23 10:11:37 crc kubenswrapper[4940]: I0223 10:11:37.555950 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/reloader/0.log" Feb 23 10:11:37 crc kubenswrapper[4940]: I0223 10:11:37.857185 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-crdxs_130d1750-19ea-4753-87f5-1e7f85169a40/frr-k8s-webhook-server/0.log" Feb 23 10:11:38 crc kubenswrapper[4940]: I0223 10:11:38.042129 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6fbfbdcfc7-6tv8l_19abcf46-c53b-4409-a6f9-e7e8b41e3182/manager/0.log" Feb 23 10:11:38 crc kubenswrapper[4940]: I0223 10:11:38.262775 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-d595fc4b7-pnf6s_462005ef-96eb-4734-9ffe-eec88929e4d2/webhook-server/0.log" Feb 23 10:11:38 crc kubenswrapper[4940]: I0223 10:11:38.278280 4940 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vw24x_2309dc31-3802-4155-847b-56d77574cee0/kube-rbac-proxy/0.log" Feb 23 10:11:38 crc kubenswrapper[4940]: I0223 10:11:38.998870 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vw24x_2309dc31-3802-4155-847b-56d77574cee0/speaker/0.log" Feb 23 10:11:39 crc kubenswrapper[4940]: I0223 10:11:39.158816 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/frr/0.log" Feb 23 10:11:43 crc kubenswrapper[4940]: I0223 10:11:43.350407 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:11:43 crc kubenswrapper[4940]: E0223 10:11:43.351257 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:11:54 crc kubenswrapper[4940]: I0223 10:11:54.346496 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:11:54 crc kubenswrapper[4940]: E0223 10:11:54.347335 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:11:54 crc kubenswrapper[4940]: I0223 10:11:54.471469 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/util/0.log" Feb 23 10:11:54 crc kubenswrapper[4940]: I0223 10:11:54.684887 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/util/0.log" Feb 23 10:11:54 crc kubenswrapper[4940]: I0223 10:11:54.685365 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/pull/0.log" Feb 23 10:11:54 crc kubenswrapper[4940]: I0223 10:11:54.740394 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/pull/0.log" Feb 23 10:11:54 crc kubenswrapper[4940]: I0223 10:11:54.919638 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/util/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.206360 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/extract/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.265806 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/pull/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.387978 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/extract-utilities/0.log" Feb 23 10:11:55 crc 
kubenswrapper[4940]: I0223 10:11:55.568744 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/extract-utilities/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.585595 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/extract-content/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.586598 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/extract-content/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.791339 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/extract-utilities/0.log" Feb 23 10:11:55 crc kubenswrapper[4940]: I0223 10:11:55.796680 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/extract-content/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.084357 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-utilities/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.316963 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-content/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.328751 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-utilities/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.351012 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-content/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.457903 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xj85m_67ecdea2-a9cb-4de7-8350-894a47f81718/registry-server/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.816440 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-utilities/0.log" Feb 23 10:11:56 crc kubenswrapper[4940]: I0223 10:11:56.899544 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-content/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.072750 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/util/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.308080 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/util/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.342340 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/pull/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.356239 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/registry-server/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.360459 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/pull/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.505502 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/pull/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.584706 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/extract/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.592998 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/util/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.773005 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hf78k_4e776654-5212-41ae-ac30-a4dafdf7a349/marketplace-operator/0.log" Feb 23 10:11:57 crc kubenswrapper[4940]: I0223 10:11:57.851832 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-utilities/0.log" Feb 23 10:11:58 crc kubenswrapper[4940]: I0223 10:11:58.403370 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-content/0.log" Feb 23 10:11:58 crc kubenswrapper[4940]: I0223 10:11:58.430595 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-utilities/0.log" Feb 23 10:11:58 crc kubenswrapper[4940]: I0223 10:11:58.806192 4940 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-content/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.044478 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-utilities/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.046718 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-content/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.113803 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-utilities/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.265373 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/registry-server/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.327905 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-content/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.332396 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-content/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.333524 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-utilities/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.624761 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-utilities/0.log" Feb 23 10:11:59 crc kubenswrapper[4940]: I0223 10:11:59.658241 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-content/0.log" Feb 23 10:12:00 crc kubenswrapper[4940]: I0223 10:12:00.398414 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/registry-server/0.log" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.118293 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tqjcb"] Feb 23 10:12:05 crc kubenswrapper[4940]: E0223 10:12:05.119468 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828c343a-4608-48d8-a790-195df40097d3" containerName="container-00" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.119495 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="828c343a-4608-48d8-a790-195df40097d3" containerName="container-00" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.119897 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="828c343a-4608-48d8-a790-195df40097d3" containerName="container-00" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.122455 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.136057 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqjcb"] Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.199445 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-catalog-content\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.199770 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-utilities\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.199896 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbxl\" (UniqueName: \"kubernetes.io/projected/71dad9c1-deb6-4b53-b44d-37c89695d02a-kube-api-access-9cbxl\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.302332 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-catalog-content\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.302393 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-utilities\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.302467 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cbxl\" (UniqueName: \"kubernetes.io/projected/71dad9c1-deb6-4b53-b44d-37c89695d02a-kube-api-access-9cbxl\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.303055 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-catalog-content\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.303111 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-utilities\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.330959 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cbxl\" (UniqueName: \"kubernetes.io/projected/71dad9c1-deb6-4b53-b44d-37c89695d02a-kube-api-access-9cbxl\") pod \"community-operators-tqjcb\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.442467 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:05 crc kubenswrapper[4940]: I0223 10:12:05.964937 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqjcb"] Feb 23 10:12:06 crc kubenswrapper[4940]: I0223 10:12:06.729999 4940 generic.go:334] "Generic (PLEG): container finished" podID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerID="64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb" exitCode=0 Feb 23 10:12:06 crc kubenswrapper[4940]: I0223 10:12:06.730204 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerDied","Data":"64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb"} Feb 23 10:12:06 crc kubenswrapper[4940]: I0223 10:12:06.730306 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerStarted","Data":"a803cfa4d2516f4460e00dafcbe5f9c977b8d150845ad060677d6bfe70e20b79"} Feb 23 10:12:06 crc kubenswrapper[4940]: I0223 10:12:06.732601 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:12:07 crc kubenswrapper[4940]: I0223 10:12:07.739909 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerStarted","Data":"66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260"} Feb 23 10:12:08 crc kubenswrapper[4940]: I0223 10:12:08.750651 4940 generic.go:334] "Generic (PLEG): container finished" podID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerID="66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260" exitCode=0 Feb 23 10:12:08 crc kubenswrapper[4940]: I0223 10:12:08.750730 4940 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerDied","Data":"66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260"} Feb 23 10:12:09 crc kubenswrapper[4940]: I0223 10:12:09.409439 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:12:09 crc kubenswrapper[4940]: E0223 10:12:09.410026 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:12:09 crc kubenswrapper[4940]: I0223 10:12:09.761204 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerStarted","Data":"d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b"} Feb 23 10:12:09 crc kubenswrapper[4940]: I0223 10:12:09.798416 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tqjcb" podStartSLOduration=2.388631741 podStartE2EDuration="4.798375372s" podCreationTimestamp="2026-02-23 10:12:05 +0000 UTC" firstStartedPulling="2026-02-23 10:12:06.732291717 +0000 UTC m=+5058.115497864" lastFinishedPulling="2026-02-23 10:12:09.142035338 +0000 UTC m=+5060.525241495" observedRunningTime="2026-02-23 10:12:09.788064695 +0000 UTC m=+5061.171270872" watchObservedRunningTime="2026-02-23 10:12:09.798375372 +0000 UTC m=+5061.181581539" Feb 23 10:12:15 crc kubenswrapper[4940]: I0223 10:12:15.443254 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:15 crc kubenswrapper[4940]: I0223 10:12:15.443784 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:15 crc kubenswrapper[4940]: I0223 10:12:15.500940 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:15 crc kubenswrapper[4940]: I0223 10:12:15.875574 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:17 crc kubenswrapper[4940]: I0223 10:12:17.884059 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tqjcb"] Feb 23 10:12:17 crc kubenswrapper[4940]: I0223 10:12:17.885550 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tqjcb" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="registry-server" containerID="cri-o://d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b" gracePeriod=2 Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.819556 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.857980 4940 generic.go:334] "Generic (PLEG): container finished" podID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerID="d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b" exitCode=0 Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.858017 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerDied","Data":"d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b"} Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.858047 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqjcb" event={"ID":"71dad9c1-deb6-4b53-b44d-37c89695d02a","Type":"ContainerDied","Data":"a803cfa4d2516f4460e00dafcbe5f9c977b8d150845ad060677d6bfe70e20b79"} Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.858064 4940 scope.go:117] "RemoveContainer" containerID="d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b" Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.858203 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqjcb" Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.876036 4940 scope.go:117] "RemoveContainer" containerID="66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260" Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.929755 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-utilities\") pod \"71dad9c1-deb6-4b53-b44d-37c89695d02a\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.929882 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-catalog-content\") pod \"71dad9c1-deb6-4b53-b44d-37c89695d02a\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.929982 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cbxl\" (UniqueName: \"kubernetes.io/projected/71dad9c1-deb6-4b53-b44d-37c89695d02a-kube-api-access-9cbxl\") pod \"71dad9c1-deb6-4b53-b44d-37c89695d02a\" (UID: \"71dad9c1-deb6-4b53-b44d-37c89695d02a\") " Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.931725 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-utilities" (OuterVolumeSpecName: "utilities") pod "71dad9c1-deb6-4b53-b44d-37c89695d02a" (UID: "71dad9c1-deb6-4b53-b44d-37c89695d02a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.940049 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dad9c1-deb6-4b53-b44d-37c89695d02a-kube-api-access-9cbxl" (OuterVolumeSpecName: "kube-api-access-9cbxl") pod "71dad9c1-deb6-4b53-b44d-37c89695d02a" (UID: "71dad9c1-deb6-4b53-b44d-37c89695d02a"). InnerVolumeSpecName "kube-api-access-9cbxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:12:18 crc kubenswrapper[4940]: I0223 10:12:18.940637 4940 scope.go:117] "RemoveContainer" containerID="64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.005888 4940 scope.go:117] "RemoveContainer" containerID="d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b" Feb 23 10:12:19 crc kubenswrapper[4940]: E0223 10:12:19.007595 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b\": container with ID starting with d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b not found: ID does not exist" containerID="d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.007693 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b"} err="failed to get container status \"d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b\": rpc error: code = NotFound desc = could not find container \"d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b\": container with ID starting with d018ed26bed3d8a7f8e431e68799bb9e9fe1d3fbd2ec2319876f2bebf337cd5b not found: ID does not exist" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.007732 
4940 scope.go:117] "RemoveContainer" containerID="66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260" Feb 23 10:12:19 crc kubenswrapper[4940]: E0223 10:12:19.009389 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260\": container with ID starting with 66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260 not found: ID does not exist" containerID="66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.009439 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260"} err="failed to get container status \"66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260\": rpc error: code = NotFound desc = could not find container \"66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260\": container with ID starting with 66945ac35693d5fe5dab503a8a46523e537ea9ed4c97be53bf4f1ab4dc44a260 not found: ID does not exist" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.009469 4940 scope.go:117] "RemoveContainer" containerID="64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb" Feb 23 10:12:19 crc kubenswrapper[4940]: E0223 10:12:19.009815 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb\": container with ID starting with 64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb not found: ID does not exist" containerID="64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.009862 4940 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb"} err="failed to get container status \"64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb\": rpc error: code = NotFound desc = could not find container \"64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb\": container with ID starting with 64856442dd5baf2acef48588019e9579fd3961479da92290b572e3ca744dc6cb not found: ID does not exist" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.011530 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71dad9c1-deb6-4b53-b44d-37c89695d02a" (UID: "71dad9c1-deb6-4b53-b44d-37c89695d02a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.032813 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.033031 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71dad9c1-deb6-4b53-b44d-37c89695d02a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.033109 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cbxl\" (UniqueName: \"kubernetes.io/projected/71dad9c1-deb6-4b53-b44d-37c89695d02a-kube-api-access-9cbxl\") on node \"crc\" DevicePath \"\"" Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.196358 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tqjcb"] Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.205462 4940 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-tqjcb"] Feb 23 10:12:19 crc kubenswrapper[4940]: I0223 10:12:19.359494 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" path="/var/lib/kubelet/pods/71dad9c1-deb6-4b53-b44d-37c89695d02a/volumes" Feb 23 10:12:21 crc kubenswrapper[4940]: I0223 10:12:21.347856 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:12:21 crc kubenswrapper[4940]: E0223 10:12:21.348104 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:12:33 crc kubenswrapper[4940]: I0223 10:12:33.345860 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:12:33 crc kubenswrapper[4940]: E0223 10:12:33.346545 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:12:35 crc kubenswrapper[4940]: E0223 10:12:35.463404 4940 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:34928->38.102.83.222:40203: write tcp 38.102.83.222:34928->38.102.83.222:40203: write: broken pipe Feb 23 10:12:44 crc kubenswrapper[4940]: I0223 10:12:44.346935 4940 
scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:12:44 crc kubenswrapper[4940]: E0223 10:12:44.347944 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:12:57 crc kubenswrapper[4940]: I0223 10:12:57.346492 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:12:57 crc kubenswrapper[4940]: E0223 10:12:57.347261 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:13:10 crc kubenswrapper[4940]: I0223 10:13:10.346695 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:13:10 crc kubenswrapper[4940]: E0223 10:13:10.347563 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:13:24 crc kubenswrapper[4940]: I0223 
10:13:24.346639 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:13:24 crc kubenswrapper[4940]: E0223 10:13:24.347891 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:13:36 crc kubenswrapper[4940]: I0223 10:13:36.345722 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:13:36 crc kubenswrapper[4940]: E0223 10:13:36.352139 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:13:51 crc kubenswrapper[4940]: I0223 10:13:51.347652 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:13:51 crc kubenswrapper[4940]: E0223 10:13:51.348441 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:14:02 crc 
kubenswrapper[4940]: I0223 10:14:02.346598 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:14:02 crc kubenswrapper[4940]: E0223 10:14:02.347543 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:14:16 crc kubenswrapper[4940]: I0223 10:14:16.345893 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:14:16 crc kubenswrapper[4940]: E0223 10:14:16.346951 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:14:28 crc kubenswrapper[4940]: I0223 10:14:28.833091 4940 generic.go:334] "Generic (PLEG): container finished" podID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerID="db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b" exitCode=0 Feb 23 10:14:28 crc kubenswrapper[4940]: I0223 10:14:28.833225 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-27xfq/must-gather-789bv" event={"ID":"e19d452b-09de-4c24-8103-0c7614f78ec2","Type":"ContainerDied","Data":"db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b"} Feb 23 10:14:28 crc kubenswrapper[4940]: I0223 10:14:28.835018 4940 scope.go:117] "RemoveContainer" 
containerID="db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b" Feb 23 10:14:29 crc kubenswrapper[4940]: I0223 10:14:29.183479 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-27xfq_must-gather-789bv_e19d452b-09de-4c24-8103-0c7614f78ec2/gather/0.log" Feb 23 10:14:29 crc kubenswrapper[4940]: I0223 10:14:29.352198 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:14:29 crc kubenswrapper[4940]: E0223 10:14:29.352432 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:14:37 crc kubenswrapper[4940]: I0223 10:14:37.954257 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-27xfq/must-gather-789bv"] Feb 23 10:14:37 crc kubenswrapper[4940]: I0223 10:14:37.955342 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-27xfq/must-gather-789bv" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="copy" containerID="cri-o://b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2" gracePeriod=2 Feb 23 10:14:37 crc kubenswrapper[4940]: I0223 10:14:37.965508 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-27xfq/must-gather-789bv"] Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.408492 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-27xfq_must-gather-789bv_e19d452b-09de-4c24-8103-0c7614f78ec2/copy/0.log" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.409163 4940 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.520441 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19d452b-09de-4c24-8103-0c7614f78ec2-must-gather-output\") pod \"e19d452b-09de-4c24-8103-0c7614f78ec2\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.520712 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdz97\" (UniqueName: \"kubernetes.io/projected/e19d452b-09de-4c24-8103-0c7614f78ec2-kube-api-access-pdz97\") pod \"e19d452b-09de-4c24-8103-0c7614f78ec2\" (UID: \"e19d452b-09de-4c24-8103-0c7614f78ec2\") " Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.527858 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19d452b-09de-4c24-8103-0c7614f78ec2-kube-api-access-pdz97" (OuterVolumeSpecName: "kube-api-access-pdz97") pod "e19d452b-09de-4c24-8103-0c7614f78ec2" (UID: "e19d452b-09de-4c24-8103-0c7614f78ec2"). InnerVolumeSpecName "kube-api-access-pdz97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.623289 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdz97\" (UniqueName: \"kubernetes.io/projected/e19d452b-09de-4c24-8103-0c7614f78ec2-kube-api-access-pdz97\") on node \"crc\" DevicePath \"\"" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.904097 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e19d452b-09de-4c24-8103-0c7614f78ec2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e19d452b-09de-4c24-8103-0c7614f78ec2" (UID: "e19d452b-09de-4c24-8103-0c7614f78ec2"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.907328 4940 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e19d452b-09de-4c24-8103-0c7614f78ec2-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.953024 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-27xfq_must-gather-789bv_e19d452b-09de-4c24-8103-0c7614f78ec2/copy/0.log" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.953814 4940 generic.go:334] "Generic (PLEG): container finished" podID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerID="b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2" exitCode=143 Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.953931 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-27xfq/must-gather-789bv" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.953943 4940 scope.go:117] "RemoveContainer" containerID="b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2" Feb 23 10:14:38 crc kubenswrapper[4940]: I0223 10:14:38.981517 4940 scope.go:117] "RemoveContainer" containerID="db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b" Feb 23 10:14:39 crc kubenswrapper[4940]: I0223 10:14:39.065274 4940 scope.go:117] "RemoveContainer" containerID="b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2" Feb 23 10:14:39 crc kubenswrapper[4940]: E0223 10:14:39.065951 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2\": container with ID starting with b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2 not found: ID does not exist" 
containerID="b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2" Feb 23 10:14:39 crc kubenswrapper[4940]: I0223 10:14:39.066007 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2"} err="failed to get container status \"b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2\": rpc error: code = NotFound desc = could not find container \"b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2\": container with ID starting with b0a4a3a31989f403c0ffa57e6556052a4ef0bd32088e93e125ed633190f04bb2 not found: ID does not exist" Feb 23 10:14:39 crc kubenswrapper[4940]: I0223 10:14:39.066036 4940 scope.go:117] "RemoveContainer" containerID="db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b" Feb 23 10:14:39 crc kubenswrapper[4940]: E0223 10:14:39.066433 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b\": container with ID starting with db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b not found: ID does not exist" containerID="db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b" Feb 23 10:14:39 crc kubenswrapper[4940]: I0223 10:14:39.066467 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b"} err="failed to get container status \"db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b\": rpc error: code = NotFound desc = could not find container \"db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b\": container with ID starting with db6e5a1d68e7bca7a1711d424b5b225a95a4aa8d62ce4bb1ed070ed45a81135b not found: ID does not exist" Feb 23 10:14:39 crc kubenswrapper[4940]: I0223 10:14:39.356365 4940 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" path="/var/lib/kubelet/pods/e19d452b-09de-4c24-8103-0c7614f78ec2/volumes" Feb 23 10:14:41 crc kubenswrapper[4940]: I0223 10:14:41.346141 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:14:41 crc kubenswrapper[4940]: E0223 10:14:41.350274 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:14:55 crc kubenswrapper[4940]: I0223 10:14:55.346306 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:14:55 crc kubenswrapper[4940]: E0223 10:14:55.347840 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.174561 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7"] Feb 23 10:15:00 crc kubenswrapper[4940]: E0223 10:15:00.176155 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="gather" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176189 4940 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="gather" Feb 23 10:15:00 crc kubenswrapper[4940]: E0223 10:15:00.176241 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="extract-utilities" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176250 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="extract-utilities" Feb 23 10:15:00 crc kubenswrapper[4940]: E0223 10:15:00.176271 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="registry-server" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176280 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="registry-server" Feb 23 10:15:00 crc kubenswrapper[4940]: E0223 10:15:00.176316 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="copy" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176326 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="copy" Feb 23 10:15:00 crc kubenswrapper[4940]: E0223 10:15:00.176352 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="extract-content" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176362 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="extract-content" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176822 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="copy" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176882 4940 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="71dad9c1-deb6-4b53-b44d-37c89695d02a" containerName="registry-server" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.176897 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19d452b-09de-4c24-8103-0c7614f78ec2" containerName="gather" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.178238 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.180823 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.181255 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.187633 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7"] Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.314471 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-secret-volume\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.314940 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-config-volume\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: 
I0223 10:15:00.314974 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4xmb\" (UniqueName: \"kubernetes.io/projected/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-kube-api-access-p4xmb\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.417041 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-secret-volume\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.417170 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-config-volume\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.417198 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4xmb\" (UniqueName: \"kubernetes.io/projected/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-kube-api-access-p4xmb\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.418292 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-config-volume\") pod \"collect-profiles-29530695-zsxp7\" (UID: 
\"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.431968 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-secret-volume\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.433984 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4xmb\" (UniqueName: \"kubernetes.io/projected/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-kube-api-access-p4xmb\") pod \"collect-profiles-29530695-zsxp7\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.505498 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:00 crc kubenswrapper[4940]: I0223 10:15:00.943699 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7"] Feb 23 10:15:01 crc kubenswrapper[4940]: I0223 10:15:01.172485 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" event={"ID":"d1c2071d-ca85-4a9e-acde-fdfb6dab602d","Type":"ContainerStarted","Data":"a0706e8e81a7a36dfaac0bd6de42ad50aec18693fb2146e82aeb1b892e416269"} Feb 23 10:15:01 crc kubenswrapper[4940]: I0223 10:15:01.172796 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" event={"ID":"d1c2071d-ca85-4a9e-acde-fdfb6dab602d","Type":"ContainerStarted","Data":"cb50bc10ec41d5fb5384d33776438b8c16bde92d7cda2917bbfef6781c30e987"} Feb 23 10:15:01 crc kubenswrapper[4940]: I0223 10:15:01.200448 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" podStartSLOduration=1.200418468 podStartE2EDuration="1.200418468s" podCreationTimestamp="2026-02-23 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:15:01.193133097 +0000 UTC m=+5232.576339254" watchObservedRunningTime="2026-02-23 10:15:01.200418468 +0000 UTC m=+5232.583624635" Feb 23 10:15:02 crc kubenswrapper[4940]: I0223 10:15:02.184369 4940 generic.go:334] "Generic (PLEG): container finished" podID="d1c2071d-ca85-4a9e-acde-fdfb6dab602d" containerID="a0706e8e81a7a36dfaac0bd6de42ad50aec18693fb2146e82aeb1b892e416269" exitCode=0 Feb 23 10:15:02 crc kubenswrapper[4940]: I0223 10:15:02.184474 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" event={"ID":"d1c2071d-ca85-4a9e-acde-fdfb6dab602d","Type":"ContainerDied","Data":"a0706e8e81a7a36dfaac0bd6de42ad50aec18693fb2146e82aeb1b892e416269"} Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.541409 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.684911 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4xmb\" (UniqueName: \"kubernetes.io/projected/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-kube-api-access-p4xmb\") pod \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.685043 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-secret-volume\") pod \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.685329 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-config-volume\") pod \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\" (UID: \"d1c2071d-ca85-4a9e-acde-fdfb6dab602d\") " Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.688033 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1c2071d-ca85-4a9e-acde-fdfb6dab602d" (UID: "d1c2071d-ca85-4a9e-acde-fdfb6dab602d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.694136 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d1c2071d-ca85-4a9e-acde-fdfb6dab602d" (UID: "d1c2071d-ca85-4a9e-acde-fdfb6dab602d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.694804 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-kube-api-access-p4xmb" (OuterVolumeSpecName: "kube-api-access-p4xmb") pod "d1c2071d-ca85-4a9e-acde-fdfb6dab602d" (UID: "d1c2071d-ca85-4a9e-acde-fdfb6dab602d"). InnerVolumeSpecName "kube-api-access-p4xmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.788405 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4xmb\" (UniqueName: \"kubernetes.io/projected/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-kube-api-access-p4xmb\") on node \"crc\" DevicePath \"\"" Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.788436 4940 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:15:03 crc kubenswrapper[4940]: I0223 10:15:03.788446 4940 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1c2071d-ca85-4a9e-acde-fdfb6dab602d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 10:15:04 crc kubenswrapper[4940]: I0223 10:15:04.201867 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" 
event={"ID":"d1c2071d-ca85-4a9e-acde-fdfb6dab602d","Type":"ContainerDied","Data":"cb50bc10ec41d5fb5384d33776438b8c16bde92d7cda2917bbfef6781c30e987"} Feb 23 10:15:04 crc kubenswrapper[4940]: I0223 10:15:04.201907 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb50bc10ec41d5fb5384d33776438b8c16bde92d7cda2917bbfef6781c30e987" Feb 23 10:15:04 crc kubenswrapper[4940]: I0223 10:15:04.201990 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530695-zsxp7" Feb 23 10:15:04 crc kubenswrapper[4940]: I0223 10:15:04.255515 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp"] Feb 23 10:15:04 crc kubenswrapper[4940]: I0223 10:15:04.266672 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530650-5t7cp"] Feb 23 10:15:05 crc kubenswrapper[4940]: I0223 10:15:05.364727 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3" path="/var/lib/kubelet/pods/608b2d6d-4d96-4ccf-82f8-8b8e0f0f15c3/volumes" Feb 23 10:15:10 crc kubenswrapper[4940]: I0223 10:15:10.345970 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:15:10 crc kubenswrapper[4940]: E0223 10:15:10.346806 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:15:24 crc kubenswrapper[4940]: I0223 10:15:24.346140 4940 scope.go:117] "RemoveContainer" 
containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:15:24 crc kubenswrapper[4940]: E0223 10:15:24.347211 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:15:37 crc kubenswrapper[4940]: I0223 10:15:37.121524 4940 scope.go:117] "RemoveContainer" containerID="5fcdb617cb1878693644da3e8b924fe966f5f382e6c84787909dd58a44ac1a19" Feb 23 10:15:37 crc kubenswrapper[4940]: I0223 10:15:37.343040 4940 scope.go:117] "RemoveContainer" containerID="df627c3f5c9f3ae2ff8743b6f1edefcd3e58d47c4737424ba0aa22c771853e51" Feb 23 10:15:37 crc kubenswrapper[4940]: I0223 10:15:37.345909 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:15:37 crc kubenswrapper[4940]: E0223 10:15:37.346424 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:15:51 crc kubenswrapper[4940]: I0223 10:15:51.346712 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:15:51 crc kubenswrapper[4940]: E0223 10:15:51.347575 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:16:04 crc kubenswrapper[4940]: I0223 10:16:04.348111 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:16:04 crc kubenswrapper[4940]: E0223 10:16:04.349374 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:16:18 crc kubenswrapper[4940]: I0223 10:16:18.345659 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:16:18 crc kubenswrapper[4940]: E0223 10:16:18.346819 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:16:33 crc kubenswrapper[4940]: I0223 10:16:33.345837 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:16:34 crc kubenswrapper[4940]: I0223 10:16:34.173453 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"546957238c5906c7fb8f7c88c8ab18ff9dc429daecaa694afa30833c82fdb34e"} Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.128870 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9m9b"] Feb 23 10:16:53 crc kubenswrapper[4940]: E0223 10:16:53.130025 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c2071d-ca85-4a9e-acde-fdfb6dab602d" containerName="collect-profiles" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.130042 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c2071d-ca85-4a9e-acde-fdfb6dab602d" containerName="collect-profiles" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.130269 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c2071d-ca85-4a9e-acde-fdfb6dab602d" containerName="collect-profiles" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.131929 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.149123 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9m9b"] Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.170879 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmf5\" (UniqueName: \"kubernetes.io/projected/2ca01729-2336-434b-8b9e-bb12a941bc70-kube-api-access-5hmf5\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.171516 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-catalog-content\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.171930 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-utilities\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.274604 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hmf5\" (UniqueName: \"kubernetes.io/projected/2ca01729-2336-434b-8b9e-bb12a941bc70-kube-api-access-5hmf5\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.274763 4940 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-catalog-content\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.274910 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-utilities\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.275355 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-catalog-content\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.275363 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-utilities\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.296413 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hmf5\" (UniqueName: \"kubernetes.io/projected/2ca01729-2336-434b-8b9e-bb12a941bc70-kube-api-access-5hmf5\") pod \"redhat-operators-z9m9b\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:53 crc kubenswrapper[4940]: I0223 10:16:53.501184 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:16:54 crc kubenswrapper[4940]: I0223 10:16:54.020718 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9m9b"] Feb 23 10:16:54 crc kubenswrapper[4940]: I0223 10:16:54.517025 4940 generic.go:334] "Generic (PLEG): container finished" podID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerID="671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568" exitCode=0 Feb 23 10:16:54 crc kubenswrapper[4940]: I0223 10:16:54.517109 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerDied","Data":"671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568"} Feb 23 10:16:54 crc kubenswrapper[4940]: I0223 10:16:54.517162 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerStarted","Data":"7f21e8e543ec080f996258ef4844d0f723fa3d4f38dfdd33eca965c1a8886cc9"} Feb 23 10:16:55 crc kubenswrapper[4940]: I0223 10:16:55.526904 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerStarted","Data":"e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9"} Feb 23 10:16:58 crc kubenswrapper[4940]: I0223 10:16:58.556542 4940 generic.go:334] "Generic (PLEG): container finished" podID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerID="e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9" exitCode=0 Feb 23 10:16:58 crc kubenswrapper[4940]: I0223 10:16:58.556653 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" 
event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerDied","Data":"e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9"} Feb 23 10:16:59 crc kubenswrapper[4940]: I0223 10:16:59.568578 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerStarted","Data":"9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936"} Feb 23 10:16:59 crc kubenswrapper[4940]: I0223 10:16:59.592775 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9m9b" podStartSLOduration=2.177114796 podStartE2EDuration="6.59275097s" podCreationTimestamp="2026-02-23 10:16:53 +0000 UTC" firstStartedPulling="2026-02-23 10:16:54.519544342 +0000 UTC m=+5345.902750499" lastFinishedPulling="2026-02-23 10:16:58.935180516 +0000 UTC m=+5350.318386673" observedRunningTime="2026-02-23 10:16:59.586246193 +0000 UTC m=+5350.969452360" watchObservedRunningTime="2026-02-23 10:16:59.59275097 +0000 UTC m=+5350.975957127" Feb 23 10:17:03 crc kubenswrapper[4940]: I0223 10:17:03.502041 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:17:03 crc kubenswrapper[4940]: I0223 10:17:03.503507 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:17:04 crc kubenswrapper[4940]: I0223 10:17:04.549446 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9m9b" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="registry-server" probeResult="failure" output=< Feb 23 10:17:04 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 10:17:04 crc kubenswrapper[4940]: > Feb 23 10:17:13 crc kubenswrapper[4940]: I0223 10:17:13.568789 4940 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:17:13 crc kubenswrapper[4940]: I0223 10:17:13.618823 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:17:13 crc kubenswrapper[4940]: I0223 10:17:13.810081 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9m9b"] Feb 23 10:17:14 crc kubenswrapper[4940]: I0223 10:17:14.712911 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9m9b" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="registry-server" containerID="cri-o://9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936" gracePeriod=2 Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.186820 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.275905 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hmf5\" (UniqueName: \"kubernetes.io/projected/2ca01729-2336-434b-8b9e-bb12a941bc70-kube-api-access-5hmf5\") pod \"2ca01729-2336-434b-8b9e-bb12a941bc70\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.276588 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-utilities\") pod \"2ca01729-2336-434b-8b9e-bb12a941bc70\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.276677 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-catalog-content\") pod 
\"2ca01729-2336-434b-8b9e-bb12a941bc70\" (UID: \"2ca01729-2336-434b-8b9e-bb12a941bc70\") " Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.277771 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-utilities" (OuterVolumeSpecName: "utilities") pod "2ca01729-2336-434b-8b9e-bb12a941bc70" (UID: "2ca01729-2336-434b-8b9e-bb12a941bc70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.302766 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca01729-2336-434b-8b9e-bb12a941bc70-kube-api-access-5hmf5" (OuterVolumeSpecName: "kube-api-access-5hmf5") pod "2ca01729-2336-434b-8b9e-bb12a941bc70" (UID: "2ca01729-2336-434b-8b9e-bb12a941bc70"). InnerVolumeSpecName "kube-api-access-5hmf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.379948 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hmf5\" (UniqueName: \"kubernetes.io/projected/2ca01729-2336-434b-8b9e-bb12a941bc70-kube-api-access-5hmf5\") on node \"crc\" DevicePath \"\"" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.380010 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.415980 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca01729-2336-434b-8b9e-bb12a941bc70" (UID: "2ca01729-2336-434b-8b9e-bb12a941bc70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.482691 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01729-2336-434b-8b9e-bb12a941bc70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.724849 4940 generic.go:334] "Generic (PLEG): container finished" podID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerID="9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936" exitCode=0 Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.724927 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9m9b" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.724924 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerDied","Data":"9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936"} Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.725019 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9m9b" event={"ID":"2ca01729-2336-434b-8b9e-bb12a941bc70","Type":"ContainerDied","Data":"7f21e8e543ec080f996258ef4844d0f723fa3d4f38dfdd33eca965c1a8886cc9"} Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.725065 4940 scope.go:117] "RemoveContainer" containerID="9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.751908 4940 scope.go:117] "RemoveContainer" containerID="e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.761113 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9m9b"] Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 
10:17:15.769669 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9m9b"] Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.785784 4940 scope.go:117] "RemoveContainer" containerID="671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.833110 4940 scope.go:117] "RemoveContainer" containerID="9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936" Feb 23 10:17:15 crc kubenswrapper[4940]: E0223 10:17:15.834138 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936\": container with ID starting with 9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936 not found: ID does not exist" containerID="9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.834214 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936"} err="failed to get container status \"9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936\": rpc error: code = NotFound desc = could not find container \"9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936\": container with ID starting with 9ba0349ca8d1b3dfa9f93b3e6959ac59dedf79da3c0e79c91d6d1cabb3bf9936 not found: ID does not exist" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.834248 4940 scope.go:117] "RemoveContainer" containerID="e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9" Feb 23 10:17:15 crc kubenswrapper[4940]: E0223 10:17:15.834732 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9\": container with ID 
starting with e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9 not found: ID does not exist" containerID="e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.834782 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9"} err="failed to get container status \"e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9\": rpc error: code = NotFound desc = could not find container \"e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9\": container with ID starting with e15ed6d6ca05204d09547cd32e6313f6521846168168c37480f44d68761f5cf9 not found: ID does not exist" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.834814 4940 scope.go:117] "RemoveContainer" containerID="671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568" Feb 23 10:17:15 crc kubenswrapper[4940]: E0223 10:17:15.835366 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568\": container with ID starting with 671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568 not found: ID does not exist" containerID="671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568" Feb 23 10:17:15 crc kubenswrapper[4940]: I0223 10:17:15.835425 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568"} err="failed to get container status \"671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568\": rpc error: code = NotFound desc = could not find container \"671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568\": container with ID starting with 671e928c0fcdd518aec6df2eb94398e6e4e7c22c80c9f5a32ed26447165d8568 not found: 
ID does not exist" Feb 23 10:17:17 crc kubenswrapper[4940]: I0223 10:17:17.363757 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" path="/var/lib/kubelet/pods/2ca01729-2336-434b-8b9e-bb12a941bc70/volumes" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.173985 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkzq4/must-gather-lc2d4"] Feb 23 10:17:57 crc kubenswrapper[4940]: E0223 10:17:57.174920 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="extract-content" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.174936 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="extract-content" Feb 23 10:17:57 crc kubenswrapper[4940]: E0223 10:17:57.174954 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="registry-server" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.174961 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="registry-server" Feb 23 10:17:57 crc kubenswrapper[4940]: E0223 10:17:57.175003 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="extract-utilities" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.175011 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="extract-utilities" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.175236 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca01729-2336-434b-8b9e-bb12a941bc70" containerName="registry-server" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.176324 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.179978 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vkzq4"/"openshift-service-ca.crt" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.179979 4940 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vkzq4"/"default-dockercfg-gqxx5" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.181553 4940 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vkzq4"/"kube-root-ca.crt" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.190046 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vkzq4/must-gather-lc2d4"] Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.271750 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdj6\" (UniqueName: \"kubernetes.io/projected/d5a55261-dc38-4ed1-88be-1552ca0e32eb-kube-api-access-qpdj6\") pod \"must-gather-lc2d4\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.271861 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5a55261-dc38-4ed1-88be-1552ca0e32eb-must-gather-output\") pod \"must-gather-lc2d4\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.373479 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdj6\" (UniqueName: \"kubernetes.io/projected/d5a55261-dc38-4ed1-88be-1552ca0e32eb-kube-api-access-qpdj6\") pod \"must-gather-lc2d4\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " 
pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.373597 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5a55261-dc38-4ed1-88be-1552ca0e32eb-must-gather-output\") pod \"must-gather-lc2d4\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.374045 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5a55261-dc38-4ed1-88be-1552ca0e32eb-must-gather-output\") pod \"must-gather-lc2d4\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.390984 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdj6\" (UniqueName: \"kubernetes.io/projected/d5a55261-dc38-4ed1-88be-1552ca0e32eb-kube-api-access-qpdj6\") pod \"must-gather-lc2d4\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:57 crc kubenswrapper[4940]: I0223 10:17:57.495457 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:17:58 crc kubenswrapper[4940]: I0223 10:17:58.167144 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vkzq4/must-gather-lc2d4"] Feb 23 10:17:58 crc kubenswrapper[4940]: I0223 10:17:58.200539 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" event={"ID":"d5a55261-dc38-4ed1-88be-1552ca0e32eb","Type":"ContainerStarted","Data":"963b528d11fc516fa5efa3b9aceae62dc35fa330aa582a04e9cf057ebd877633"} Feb 23 10:18:00 crc kubenswrapper[4940]: I0223 10:18:00.218262 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" event={"ID":"d5a55261-dc38-4ed1-88be-1552ca0e32eb","Type":"ContainerStarted","Data":"a5f5bbec0f2c57e5fef78806c16d0443ebbe1638063dcd969867bf125f2216ce"} Feb 23 10:18:00 crc kubenswrapper[4940]: I0223 10:18:00.218706 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" event={"ID":"d5a55261-dc38-4ed1-88be-1552ca0e32eb","Type":"ContainerStarted","Data":"73d7466bbc598593251e71a094b4762d0b3dc311e1f807f8766ed7bb8ca0c3fe"} Feb 23 10:18:00 crc kubenswrapper[4940]: I0223 10:18:00.240196 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" podStartSLOduration=3.24017806 podStartE2EDuration="3.24017806s" podCreationTimestamp="2026-02-23 10:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:18:00.236121511 +0000 UTC m=+5411.619327688" watchObservedRunningTime="2026-02-23 10:18:00.24017806 +0000 UTC m=+5411.623384207" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.086299 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-wt6l7"] Feb 23 10:18:04 crc kubenswrapper[4940]: 
I0223 10:18:04.088056 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.132951 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6bc\" (UniqueName: \"kubernetes.io/projected/72c14b64-3aa8-4297-8bc3-86f737e34bb4-kube-api-access-gx6bc\") pod \"crc-debug-wt6l7\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.133007 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72c14b64-3aa8-4297-8bc3-86f737e34bb4-host\") pod \"crc-debug-wt6l7\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.235163 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6bc\" (UniqueName: \"kubernetes.io/projected/72c14b64-3aa8-4297-8bc3-86f737e34bb4-kube-api-access-gx6bc\") pod \"crc-debug-wt6l7\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.235213 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72c14b64-3aa8-4297-8bc3-86f737e34bb4-host\") pod \"crc-debug-wt6l7\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.235391 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72c14b64-3aa8-4297-8bc3-86f737e34bb4-host\") pod \"crc-debug-wt6l7\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") 
" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.256112 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6bc\" (UniqueName: \"kubernetes.io/projected/72c14b64-3aa8-4297-8bc3-86f737e34bb4-kube-api-access-gx6bc\") pod \"crc-debug-wt6l7\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: I0223 10:18:04.412759 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:04 crc kubenswrapper[4940]: W0223 10:18:04.459958 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72c14b64_3aa8_4297_8bc3_86f737e34bb4.slice/crio-dffb6b0ee7f6ad5af7d25d2706a67e1293797e0194e9a6d26d4b79a256ab37d8 WatchSource:0}: Error finding container dffb6b0ee7f6ad5af7d25d2706a67e1293797e0194e9a6d26d4b79a256ab37d8: Status 404 returned error can't find the container with id dffb6b0ee7f6ad5af7d25d2706a67e1293797e0194e9a6d26d4b79a256ab37d8 Feb 23 10:18:05 crc kubenswrapper[4940]: I0223 10:18:05.260288 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" event={"ID":"72c14b64-3aa8-4297-8bc3-86f737e34bb4","Type":"ContainerStarted","Data":"4839673769c8a99599d4f7289f855357137e6dd69fac5fe55637db30c9d2d1e6"} Feb 23 10:18:05 crc kubenswrapper[4940]: I0223 10:18:05.260860 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" event={"ID":"72c14b64-3aa8-4297-8bc3-86f737e34bb4","Type":"ContainerStarted","Data":"dffb6b0ee7f6ad5af7d25d2706a67e1293797e0194e9a6d26d4b79a256ab37d8"} Feb 23 10:18:05 crc kubenswrapper[4940]: I0223 10:18:05.285729 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" 
podStartSLOduration=1.285705549 podStartE2EDuration="1.285705549s" podCreationTimestamp="2026-02-23 10:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 10:18:05.2787918 +0000 UTC m=+5416.661997967" watchObservedRunningTime="2026-02-23 10:18:05.285705549 +0000 UTC m=+5416.668911716" Feb 23 10:18:12 crc kubenswrapper[4940]: I0223 10:18:12.964985 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nkx7p"] Feb 23 10:18:12 crc kubenswrapper[4940]: I0223 10:18:12.967992 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:12 crc kubenswrapper[4940]: I0223 10:18:12.974710 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkx7p"] Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.014104 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-catalog-content\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.014244 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk99g\" (UniqueName: \"kubernetes.io/projected/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-kube-api-access-fk99g\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.014393 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-utilities\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.115812 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-utilities\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.116130 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-catalog-content\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.116173 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk99g\" (UniqueName: \"kubernetes.io/projected/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-kube-api-access-fk99g\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.116457 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-utilities\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.116694 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-catalog-content\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.136810 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk99g\" (UniqueName: \"kubernetes.io/projected/c2f95df1-67e7-47f8-aa50-91cbcdd1036d-kube-api-access-fk99g\") pod \"certified-operators-nkx7p\" (UID: \"c2f95df1-67e7-47f8-aa50-91cbcdd1036d\") " pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:13 crc kubenswrapper[4940]: I0223 10:18:13.405931 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:14 crc kubenswrapper[4940]: W0223 10:18:14.084256 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2f95df1_67e7_47f8_aa50_91cbcdd1036d.slice/crio-f9b7919242e7069d5e3a1d657b5a5bf56d4e216ab42e77f4b8cdb6434a0133b9 WatchSource:0}: Error finding container f9b7919242e7069d5e3a1d657b5a5bf56d4e216ab42e77f4b8cdb6434a0133b9: Status 404 returned error can't find the container with id f9b7919242e7069d5e3a1d657b5a5bf56d4e216ab42e77f4b8cdb6434a0133b9 Feb 23 10:18:14 crc kubenswrapper[4940]: I0223 10:18:14.086882 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkx7p"] Feb 23 10:18:14 crc kubenswrapper[4940]: I0223 10:18:14.339475 4940 generic.go:334] "Generic (PLEG): container finished" podID="c2f95df1-67e7-47f8-aa50-91cbcdd1036d" containerID="9d848967d6b5e3c2e0a70b5d7833f29ce287a00fbb71550065b681d0da923b60" exitCode=0 Feb 23 10:18:14 crc kubenswrapper[4940]: I0223 10:18:14.340092 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkx7p" 
event={"ID":"c2f95df1-67e7-47f8-aa50-91cbcdd1036d","Type":"ContainerDied","Data":"9d848967d6b5e3c2e0a70b5d7833f29ce287a00fbb71550065b681d0da923b60"} Feb 23 10:18:14 crc kubenswrapper[4940]: I0223 10:18:14.340131 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkx7p" event={"ID":"c2f95df1-67e7-47f8-aa50-91cbcdd1036d","Type":"ContainerStarted","Data":"f9b7919242e7069d5e3a1d657b5a5bf56d4e216ab42e77f4b8cdb6434a0133b9"} Feb 23 10:18:14 crc kubenswrapper[4940]: I0223 10:18:14.342413 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:18:21 crc kubenswrapper[4940]: I0223 10:18:21.425111 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkx7p" event={"ID":"c2f95df1-67e7-47f8-aa50-91cbcdd1036d","Type":"ContainerStarted","Data":"15188c185ff3ff739b4e08e58104601e5af55528f97e7815635be4fbd6158cce"} Feb 23 10:18:22 crc kubenswrapper[4940]: I0223 10:18:22.437774 4940 generic.go:334] "Generic (PLEG): container finished" podID="c2f95df1-67e7-47f8-aa50-91cbcdd1036d" containerID="15188c185ff3ff739b4e08e58104601e5af55528f97e7815635be4fbd6158cce" exitCode=0 Feb 23 10:18:22 crc kubenswrapper[4940]: I0223 10:18:22.438089 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkx7p" event={"ID":"c2f95df1-67e7-47f8-aa50-91cbcdd1036d","Type":"ContainerDied","Data":"15188c185ff3ff739b4e08e58104601e5af55528f97e7815635be4fbd6158cce"} Feb 23 10:18:23 crc kubenswrapper[4940]: I0223 10:18:23.455273 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkx7p" event={"ID":"c2f95df1-67e7-47f8-aa50-91cbcdd1036d","Type":"ContainerStarted","Data":"b13ff60c7c04f3e06af06fe704cab3a8a3044df8d6670198a6d831cbe9123000"} Feb 23 10:18:23 crc kubenswrapper[4940]: I0223 10:18:23.483293 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-nkx7p" podStartSLOduration=2.9752497350000002 podStartE2EDuration="11.48326559s" podCreationTimestamp="2026-02-23 10:18:12 +0000 UTC" firstStartedPulling="2026-02-23 10:18:14.341950901 +0000 UTC m=+5425.725157058" lastFinishedPulling="2026-02-23 10:18:22.849966756 +0000 UTC m=+5434.233172913" observedRunningTime="2026-02-23 10:18:23.475296358 +0000 UTC m=+5434.858502525" watchObservedRunningTime="2026-02-23 10:18:23.48326559 +0000 UTC m=+5434.866471747" Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.407280 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.408744 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.673696 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.723963 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nkx7p" Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.852553 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkx7p"] Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.923269 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xj85m"] Feb 23 10:18:33 crc kubenswrapper[4940]: I0223 10:18:33.923634 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xj85m" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="registry-server" containerID="cri-o://8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b" gracePeriod=2 Feb 23 10:18:34 
crc kubenswrapper[4940]: I0223 10:18:34.403094 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.449899 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7rg\" (UniqueName: \"kubernetes.io/projected/67ecdea2-a9cb-4de7-8350-894a47f81718-kube-api-access-qc7rg\") pod \"67ecdea2-a9cb-4de7-8350-894a47f81718\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.449997 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-catalog-content\") pod \"67ecdea2-a9cb-4de7-8350-894a47f81718\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.461217 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ecdea2-a9cb-4de7-8350-894a47f81718-kube-api-access-qc7rg" (OuterVolumeSpecName: "kube-api-access-qc7rg") pod "67ecdea2-a9cb-4de7-8350-894a47f81718" (UID: "67ecdea2-a9cb-4de7-8350-894a47f81718"). InnerVolumeSpecName "kube-api-access-qc7rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.539148 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ecdea2-a9cb-4de7-8350-894a47f81718" (UID: "67ecdea2-a9cb-4de7-8350-894a47f81718"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.553864 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-utilities\") pod \"67ecdea2-a9cb-4de7-8350-894a47f81718\" (UID: \"67ecdea2-a9cb-4de7-8350-894a47f81718\") " Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.554433 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc7rg\" (UniqueName: \"kubernetes.io/projected/67ecdea2-a9cb-4de7-8350-894a47f81718-kube-api-access-qc7rg\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.554456 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.554872 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-utilities" (OuterVolumeSpecName: "utilities") pod "67ecdea2-a9cb-4de7-8350-894a47f81718" (UID: "67ecdea2-a9cb-4de7-8350-894a47f81718"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.656126 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ecdea2-a9cb-4de7-8350-894a47f81718-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.659234 4940 generic.go:334] "Generic (PLEG): container finished" podID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerID="8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b" exitCode=0 Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.659342 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xj85m" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.659331 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xj85m" event={"ID":"67ecdea2-a9cb-4de7-8350-894a47f81718","Type":"ContainerDied","Data":"8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b"} Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.659401 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xj85m" event={"ID":"67ecdea2-a9cb-4de7-8350-894a47f81718","Type":"ContainerDied","Data":"597b0952c3d1134efc8220d6ccd84d2fbf4d2f69a33a184624158d51e3402eb9"} Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.659435 4940 scope.go:117] "RemoveContainer" containerID="8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.713698 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xj85m"] Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.717122 4940 scope.go:117] "RemoveContainer" containerID="2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 
10:18:34.723528 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xj85m"] Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.745280 4940 scope.go:117] "RemoveContainer" containerID="b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.802361 4940 scope.go:117] "RemoveContainer" containerID="8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b" Feb 23 10:18:34 crc kubenswrapper[4940]: E0223 10:18:34.802848 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b\": container with ID starting with 8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b not found: ID does not exist" containerID="8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.802880 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b"} err="failed to get container status \"8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b\": rpc error: code = NotFound desc = could not find container \"8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b\": container with ID starting with 8b6045a482b117e66f4de4ad021ea277c4cb65d1b82c113346f665bcfc5e0f8b not found: ID does not exist" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.802904 4940 scope.go:117] "RemoveContainer" containerID="2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2" Feb 23 10:18:34 crc kubenswrapper[4940]: E0223 10:18:34.803236 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2\": container with ID 
starting with 2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2 not found: ID does not exist" containerID="2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.803259 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2"} err="failed to get container status \"2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2\": rpc error: code = NotFound desc = could not find container \"2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2\": container with ID starting with 2a7a7e36c6b83e68641e4beb9faee4dcd6f50a820bb8cd0c2905a54dad3a76b2 not found: ID does not exist" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.803274 4940 scope.go:117] "RemoveContainer" containerID="b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3" Feb 23 10:18:34 crc kubenswrapper[4940]: E0223 10:18:34.803658 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3\": container with ID starting with b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3 not found: ID does not exist" containerID="b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3" Feb 23 10:18:34 crc kubenswrapper[4940]: I0223 10:18:34.803680 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3"} err="failed to get container status \"b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3\": rpc error: code = NotFound desc = could not find container \"b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3\": container with ID starting with b268e3397600f032ca6e965b7e80582aa3b72aa69a7d6a1d213a8aa3361180c3 not found: 
ID does not exist" Feb 23 10:18:36 crc kubenswrapper[4940]: I0223 10:18:35.355313 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" path="/var/lib/kubelet/pods/67ecdea2-a9cb-4de7-8350-894a47f81718/volumes" Feb 23 10:18:54 crc kubenswrapper[4940]: I0223 10:18:54.928749 4940 generic.go:334] "Generic (PLEG): container finished" podID="72c14b64-3aa8-4297-8bc3-86f737e34bb4" containerID="4839673769c8a99599d4f7289f855357137e6dd69fac5fe55637db30c9d2d1e6" exitCode=0 Feb 23 10:18:54 crc kubenswrapper[4940]: I0223 10:18:54.928843 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" event={"ID":"72c14b64-3aa8-4297-8bc3-86f737e34bb4","Type":"ContainerDied","Data":"4839673769c8a99599d4f7289f855357137e6dd69fac5fe55637db30c9d2d1e6"} Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.088200 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.141791 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-wt6l7"] Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.152674 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-wt6l7"] Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.259441 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6bc\" (UniqueName: \"kubernetes.io/projected/72c14b64-3aa8-4297-8bc3-86f737e34bb4-kube-api-access-gx6bc\") pod \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.259696 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72c14b64-3aa8-4297-8bc3-86f737e34bb4-host\") pod 
\"72c14b64-3aa8-4297-8bc3-86f737e34bb4\" (UID: \"72c14b64-3aa8-4297-8bc3-86f737e34bb4\") " Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.259820 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72c14b64-3aa8-4297-8bc3-86f737e34bb4-host" (OuterVolumeSpecName: "host") pod "72c14b64-3aa8-4297-8bc3-86f737e34bb4" (UID: "72c14b64-3aa8-4297-8bc3-86f737e34bb4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.260251 4940 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/72c14b64-3aa8-4297-8bc3-86f737e34bb4-host\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.267955 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c14b64-3aa8-4297-8bc3-86f737e34bb4-kube-api-access-gx6bc" (OuterVolumeSpecName: "kube-api-access-gx6bc") pod "72c14b64-3aa8-4297-8bc3-86f737e34bb4" (UID: "72c14b64-3aa8-4297-8bc3-86f737e34bb4"). InnerVolumeSpecName "kube-api-access-gx6bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.362381 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6bc\" (UniqueName: \"kubernetes.io/projected/72c14b64-3aa8-4297-8bc3-86f737e34bb4-kube-api-access-gx6bc\") on node \"crc\" DevicePath \"\"" Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.953839 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dffb6b0ee7f6ad5af7d25d2706a67e1293797e0194e9a6d26d4b79a256ab37d8" Feb 23 10:18:56 crc kubenswrapper[4940]: I0223 10:18:56.953893 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-wt6l7" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.357089 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c14b64-3aa8-4297-8bc3-86f737e34bb4" path="/var/lib/kubelet/pods/72c14b64-3aa8-4297-8bc3-86f737e34bb4/volumes" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.381721 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-rfnp5"] Feb 23 10:18:57 crc kubenswrapper[4940]: E0223 10:18:57.382145 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c14b64-3aa8-4297-8bc3-86f737e34bb4" containerName="container-00" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.382160 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c14b64-3aa8-4297-8bc3-86f737e34bb4" containerName="container-00" Feb 23 10:18:57 crc kubenswrapper[4940]: E0223 10:18:57.382173 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="registry-server" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.382179 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="registry-server" Feb 23 10:18:57 crc kubenswrapper[4940]: E0223 10:18:57.382204 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="extract-content" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.382210 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="extract-content" Feb 23 10:18:57 crc kubenswrapper[4940]: E0223 10:18:57.382232 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="extract-utilities" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.382238 4940 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="extract-utilities" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.382500 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c14b64-3aa8-4297-8bc3-86f737e34bb4" containerName="container-00" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.382523 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ecdea2-a9cb-4de7-8350-894a47f81718" containerName="registry-server" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.383201 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.391916 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59878980-9641-489f-ad48-2b98ac5e2988-host\") pod \"crc-debug-rfnp5\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.392139 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk8tt\" (UniqueName: \"kubernetes.io/projected/59878980-9641-489f-ad48-2b98ac5e2988-kube-api-access-rk8tt\") pod \"crc-debug-rfnp5\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.564992 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk8tt\" (UniqueName: \"kubernetes.io/projected/59878980-9641-489f-ad48-2b98ac5e2988-kube-api-access-rk8tt\") pod \"crc-debug-rfnp5\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.565430 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/59878980-9641-489f-ad48-2b98ac5e2988-host\") pod \"crc-debug-rfnp5\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.565596 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59878980-9641-489f-ad48-2b98ac5e2988-host\") pod \"crc-debug-rfnp5\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.596200 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk8tt\" (UniqueName: \"kubernetes.io/projected/59878980-9641-489f-ad48-2b98ac5e2988-kube-api-access-rk8tt\") pod \"crc-debug-rfnp5\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.698937 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:18:57 crc kubenswrapper[4940]: I0223 10:18:57.962190 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" event={"ID":"59878980-9641-489f-ad48-2b98ac5e2988","Type":"ContainerStarted","Data":"4a555d9db513964b8cc401464c08c018a11791fe1f1e68354ff671f1b7cfeffb"} Feb 23 10:18:58 crc kubenswrapper[4940]: I0223 10:18:58.974320 4940 generic.go:334] "Generic (PLEG): container finished" podID="59878980-9641-489f-ad48-2b98ac5e2988" containerID="0e2706b8d43476e2fe65a82f4c6a7a555a0ab7f7a987ad12004d7c679b8ebf88" exitCode=0 Feb 23 10:18:58 crc kubenswrapper[4940]: I0223 10:18:58.974402 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" event={"ID":"59878980-9641-489f-ad48-2b98ac5e2988","Type":"ContainerDied","Data":"0e2706b8d43476e2fe65a82f4c6a7a555a0ab7f7a987ad12004d7c679b8ebf88"} Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.324419 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.489422 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk8tt\" (UniqueName: \"kubernetes.io/projected/59878980-9641-489f-ad48-2b98ac5e2988-kube-api-access-rk8tt\") pod \"59878980-9641-489f-ad48-2b98ac5e2988\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.489521 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59878980-9641-489f-ad48-2b98ac5e2988-host\") pod \"59878980-9641-489f-ad48-2b98ac5e2988\" (UID: \"59878980-9641-489f-ad48-2b98ac5e2988\") " Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.489852 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59878980-9641-489f-ad48-2b98ac5e2988-host" (OuterVolumeSpecName: "host") pod "59878980-9641-489f-ad48-2b98ac5e2988" (UID: "59878980-9641-489f-ad48-2b98ac5e2988"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.490492 4940 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59878980-9641-489f-ad48-2b98ac5e2988-host\") on node \"crc\" DevicePath \"\"" Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.494241 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59878980-9641-489f-ad48-2b98ac5e2988-kube-api-access-rk8tt" (OuterVolumeSpecName: "kube-api-access-rk8tt") pod "59878980-9641-489f-ad48-2b98ac5e2988" (UID: "59878980-9641-489f-ad48-2b98ac5e2988"). InnerVolumeSpecName "kube-api-access-rk8tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.591794 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk8tt\" (UniqueName: \"kubernetes.io/projected/59878980-9641-489f-ad48-2b98ac5e2988-kube-api-access-rk8tt\") on node \"crc\" DevicePath \"\"" Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.993476 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" event={"ID":"59878980-9641-489f-ad48-2b98ac5e2988","Type":"ContainerDied","Data":"4a555d9db513964b8cc401464c08c018a11791fe1f1e68354ff671f1b7cfeffb"} Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.993512 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-rfnp5" Feb 23 10:19:00 crc kubenswrapper[4940]: I0223 10:19:00.993513 4940 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a555d9db513964b8cc401464c08c018a11791fe1f1e68354ff671f1b7cfeffb" Feb 23 10:19:01 crc kubenswrapper[4940]: I0223 10:19:01.428885 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:19:01 crc kubenswrapper[4940]: I0223 10:19:01.428963 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:19:01 crc kubenswrapper[4940]: I0223 10:19:01.935668 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-rfnp5"] Feb 23 10:19:01 
crc kubenswrapper[4940]: I0223 10:19:01.941391 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-rfnp5"] Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.292689 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-4vnw8"] Feb 23 10:19:03 crc kubenswrapper[4940]: E0223 10:19:03.293394 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59878980-9641-489f-ad48-2b98ac5e2988" containerName="container-00" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.293417 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="59878980-9641-489f-ad48-2b98ac5e2988" containerName="container-00" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.293699 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="59878980-9641-489f-ad48-2b98ac5e2988" containerName="container-00" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.294514 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.356516 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59878980-9641-489f-ad48-2b98ac5e2988" path="/var/lib/kubelet/pods/59878980-9641-489f-ad48-2b98ac5e2988/volumes" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.401184 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2g7m\" (UniqueName: \"kubernetes.io/projected/a51f07ce-77b2-4aa0-b521-63b587e319e6-kube-api-access-q2g7m\") pod \"crc-debug-4vnw8\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.401997 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a51f07ce-77b2-4aa0-b521-63b587e319e6-host\") pod \"crc-debug-4vnw8\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.503679 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a51f07ce-77b2-4aa0-b521-63b587e319e6-host\") pod \"crc-debug-4vnw8\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.503830 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2g7m\" (UniqueName: \"kubernetes.io/projected/a51f07ce-77b2-4aa0-b521-63b587e319e6-kube-api-access-q2g7m\") pod \"crc-debug-4vnw8\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.503855 4940 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a51f07ce-77b2-4aa0-b521-63b587e319e6-host\") pod \"crc-debug-4vnw8\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.523245 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2g7m\" (UniqueName: \"kubernetes.io/projected/a51f07ce-77b2-4aa0-b521-63b587e319e6-kube-api-access-q2g7m\") pod \"crc-debug-4vnw8\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:03 crc kubenswrapper[4940]: I0223 10:19:03.613479 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:04 crc kubenswrapper[4940]: W0223 10:19:04.341247 4940 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda51f07ce_77b2_4aa0_b521_63b587e319e6.slice/crio-9392f1fa3b27117e2b83b4b6fc57cd77f6bb4f1101f4656a71563008160b3888 WatchSource:0}: Error finding container 9392f1fa3b27117e2b83b4b6fc57cd77f6bb4f1101f4656a71563008160b3888: Status 404 returned error can't find the container with id 9392f1fa3b27117e2b83b4b6fc57cd77f6bb4f1101f4656a71563008160b3888 Feb 23 10:19:05 crc kubenswrapper[4940]: I0223 10:19:05.041395 4940 generic.go:334] "Generic (PLEG): container finished" podID="a51f07ce-77b2-4aa0-b521-63b587e319e6" containerID="844a762a24ac3a97743169cfc21cb4668f0f86b1b781e00c284f526e28b6addf" exitCode=0 Feb 23 10:19:05 crc kubenswrapper[4940]: I0223 10:19:05.041492 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" event={"ID":"a51f07ce-77b2-4aa0-b521-63b587e319e6","Type":"ContainerDied","Data":"844a762a24ac3a97743169cfc21cb4668f0f86b1b781e00c284f526e28b6addf"} Feb 23 10:19:05 crc kubenswrapper[4940]: I0223 10:19:05.041781 4940 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" event={"ID":"a51f07ce-77b2-4aa0-b521-63b587e319e6","Type":"ContainerStarted","Data":"9392f1fa3b27117e2b83b4b6fc57cd77f6bb4f1101f4656a71563008160b3888"} Feb 23 10:19:05 crc kubenswrapper[4940]: I0223 10:19:05.080281 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-4vnw8"] Feb 23 10:19:05 crc kubenswrapper[4940]: I0223 10:19:05.088133 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkzq4/crc-debug-4vnw8"] Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.646627 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.751713 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2g7m\" (UniqueName: \"kubernetes.io/projected/a51f07ce-77b2-4aa0-b521-63b587e319e6-kube-api-access-q2g7m\") pod \"a51f07ce-77b2-4aa0-b521-63b587e319e6\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.752285 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a51f07ce-77b2-4aa0-b521-63b587e319e6-host\") pod \"a51f07ce-77b2-4aa0-b521-63b587e319e6\" (UID: \"a51f07ce-77b2-4aa0-b521-63b587e319e6\") " Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.752359 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a51f07ce-77b2-4aa0-b521-63b587e319e6-host" (OuterVolumeSpecName: "host") pod "a51f07ce-77b2-4aa0-b521-63b587e319e6" (UID: "a51f07ce-77b2-4aa0-b521-63b587e319e6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.752874 4940 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a51f07ce-77b2-4aa0-b521-63b587e319e6-host\") on node \"crc\" DevicePath \"\"" Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.764984 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51f07ce-77b2-4aa0-b521-63b587e319e6-kube-api-access-q2g7m" (OuterVolumeSpecName: "kube-api-access-q2g7m") pod "a51f07ce-77b2-4aa0-b521-63b587e319e6" (UID: "a51f07ce-77b2-4aa0-b521-63b587e319e6"). InnerVolumeSpecName "kube-api-access-q2g7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:19:06 crc kubenswrapper[4940]: I0223 10:19:06.854725 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2g7m\" (UniqueName: \"kubernetes.io/projected/a51f07ce-77b2-4aa0-b521-63b587e319e6-kube-api-access-q2g7m\") on node \"crc\" DevicePath \"\"" Feb 23 10:19:07 crc kubenswrapper[4940]: I0223 10:19:07.174650 4940 scope.go:117] "RemoveContainer" containerID="844a762a24ac3a97743169cfc21cb4668f0f86b1b781e00c284f526e28b6addf" Feb 23 10:19:07 crc kubenswrapper[4940]: I0223 10:19:07.174690 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/crc-debug-4vnw8" Feb 23 10:19:07 crc kubenswrapper[4940]: I0223 10:19:07.382243 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51f07ce-77b2-4aa0-b521-63b587e319e6" path="/var/lib/kubelet/pods/a51f07ce-77b2-4aa0-b521-63b587e319e6/volumes" Feb 23 10:19:31 crc kubenswrapper[4940]: I0223 10:19:31.429098 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:19:31 crc kubenswrapper[4940]: I0223 10:19:31.429736 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:19:37 crc kubenswrapper[4940]: I0223 10:19:37.635330 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8f67f879d-fb7mr_e15eadde-81b6-46a2-bc90-7f8ded67b3bd/barbican-api/0.log" Feb 23 10:19:37 crc kubenswrapper[4940]: I0223 10:19:37.795755 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8f67f879d-fb7mr_e15eadde-81b6-46a2-bc90-7f8ded67b3bd/barbican-api-log/0.log" Feb 23 10:19:37 crc kubenswrapper[4940]: I0223 10:19:37.854007 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d6cbfd9cd-f6hzm_8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce/barbican-keystone-listener/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.077012 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-7b9b88c6bc-hkv9v_8008f8dc-0709-408f-88d1-0707f66c0a10/barbican-worker/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.182458 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7b9b88c6bc-hkv9v_8008f8dc-0709-408f-88d1-0707f66c0a10/barbican-worker-log/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.367988 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kvmwh_5d90dbb8-e870-41e1-bbab-a053b479fee1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.622931 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/ceilometer-notification-agent/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.651198 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d6cbfd9cd-f6hzm_8e5d03e5-c75c-4cfe-a31a-fa57e04a51ce/barbican-keystone-listener-log/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.695300 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/ceilometer-central-agent/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.708030 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/proxy-httpd/0.log" Feb 23 10:19:38 crc kubenswrapper[4940]: I0223 10:19:38.816378 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3545f0d9-3f75-4de3-ab04-716362d1a057/sg-core/0.log" Feb 23 10:19:39 crc kubenswrapper[4940]: I0223 10:19:39.014028 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph_63ebc8a2-744a-4844-b60d-80fefedbf7df/ceph/0.log" Feb 23 10:19:39 crc kubenswrapper[4940]: I0223 10:19:39.425067 
4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f91c0e0d-08da-47b9-acef-5e4e9856fc85/cinder-api/0.log" Feb 23 10:19:39 crc kubenswrapper[4940]: I0223 10:19:39.437891 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f91c0e0d-08da-47b9-acef-5e4e9856fc85/cinder-api-log/0.log" Feb 23 10:19:39 crc kubenswrapper[4940]: I0223 10:19:39.607985 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_20de4506-b14e-4f9b-9afc-c4d9ac6aef52/probe/0.log" Feb 23 10:19:39 crc kubenswrapper[4940]: I0223 10:19:39.756664 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8ea914bd-a046-42ba-942e-7d3d778d0b52/cinder-scheduler/0.log" Feb 23 10:19:39 crc kubenswrapper[4940]: I0223 10:19:39.935642 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8ea914bd-a046-42ba-942e-7d3d778d0b52/probe/0.log" Feb 23 10:19:40 crc kubenswrapper[4940]: I0223 10:19:40.295054 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f446400f-c44a-49c0-891b-83b475c43e39/probe/0.log" Feb 23 10:19:40 crc kubenswrapper[4940]: I0223 10:19:40.594316 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-z77p7_10b1d407-edfe-4a01-9d25-ae2d0491e2aa/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:40 crc kubenswrapper[4940]: I0223 10:19:40.895989 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ttbp8_bf532816-d5b9-4205-844c-bf70b4cc5c18/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:41 crc kubenswrapper[4940]: I0223 10:19:41.111728 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-v8kxm_99e36de7-b768-429a-a0c5-78ee546952bf/init/0.log" Feb 23 10:19:41 
crc kubenswrapper[4940]: I0223 10:19:41.384879 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-v8kxm_99e36de7-b768-429a-a0c5-78ee546952bf/init/0.log" Feb 23 10:19:41 crc kubenswrapper[4940]: I0223 10:19:41.764740 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d99fc9df9-v8kxm_99e36de7-b768-429a-a0c5-78ee546952bf/dnsmasq-dns/0.log" Feb 23 10:19:41 crc kubenswrapper[4940]: I0223 10:19:41.873715 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nr6hb_50cd61db-fb52-4abe-a3c6-7c3e3777d04b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.023976 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_20de4506-b14e-4f9b-9afc-c4d9ac6aef52/cinder-backup/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.051234 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4c37345c-c81e-4d3f-8b55-8eec1705a5a1/glance-httpd/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.105948 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4c37345c-c81e-4d3f-8b55-8eec1705a5a1/glance-log/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.257405 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a6886923-a3fa-46f7-97f5-7864c61a5137/glance-log/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.405498 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_a6886923-a3fa-46f7-97f5-7864c61a5137/glance-httpd/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.616419 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-8485464bb-cvmj5_0c698dee-e3c4-44d3-a08b-73e6b1e87986/horizon/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.706719 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l9x62_b42a2d02-c866-40d6-93ce-81d71aaf7195/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:42 crc kubenswrapper[4940]: I0223 10:19:42.959586 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nntpm_05541a9b-b462-4150-b0d7-131d75a1d775/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:43 crc kubenswrapper[4940]: I0223 10:19:43.193160 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29530681-kgpdn_62c1078b-acb9-4ce6-9c47-290a2ec6e9b0/keystone-cron/0.log" Feb 23 10:19:43 crc kubenswrapper[4940]: I0223 10:19:43.428410 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_15d7a09a-83f9-4b41-a280-e0d7257ee6f3/kube-state-metrics/0.log" Feb 23 10:19:43 crc kubenswrapper[4940]: I0223 10:19:43.621284 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8485464bb-cvmj5_0c698dee-e3c4-44d3-a08b-73e6b1e87986/horizon-log/0.log" Feb 23 10:19:43 crc kubenswrapper[4940]: I0223 10:19:43.723203 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-t6ttl_7376823f-eb39-4631-9cac-0d4b297a9580/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:43 crc kubenswrapper[4940]: I0223 10:19:43.912146 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f446400f-c44a-49c0-891b-83b475c43e39/cinder-volume/0.log" Feb 23 10:19:44 crc kubenswrapper[4940]: I0223 10:19:44.361657 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_6efb7037-6af6-4b85-b2fc-940a912cddf4/probe/0.log" Feb 23 10:19:44 crc kubenswrapper[4940]: I0223 10:19:44.566541 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_6efb7037-6af6-4b85-b2fc-940a912cddf4/manila-scheduler/0.log" Feb 23 10:19:44 crc kubenswrapper[4940]: I0223 10:19:44.754426 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3f78e173-a538-4fa3-804d-25bff89a23ca/manila-api/0.log" Feb 23 10:19:44 crc kubenswrapper[4940]: I0223 10:19:44.819921 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1cfd9d39-e351-44f6-90b2-02c15fef4e9f/probe/0.log" Feb 23 10:19:45 crc kubenswrapper[4940]: I0223 10:19:45.246958 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_1cfd9d39-e351-44f6-90b2-02c15fef4e9f/manila-share/0.log" Feb 23 10:19:45 crc kubenswrapper[4940]: I0223 10:19:45.327144 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_3f78e173-a538-4fa3-804d-25bff89a23ca/manila-api-log/0.log" Feb 23 10:19:45 crc kubenswrapper[4940]: I0223 10:19:45.839362 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jl6h4_dea28292-7367-4777-9e99-80da3a9c51cf/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:46 crc kubenswrapper[4940]: I0223 10:19:46.118532 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e0aedede-6061-46c9-8fd2-88a2e1880c2f/memcached/0.log" Feb 23 10:19:46 crc kubenswrapper[4940]: I0223 10:19:46.202960 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-654489f6f-92jdq_feae1958-0b14-4a24-af08-cb96a4131a47/neutron-httpd/0.log" Feb 23 10:19:46 crc kubenswrapper[4940]: I0223 10:19:46.861739 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-654489f6f-92jdq_feae1958-0b14-4a24-af08-cb96a4131a47/neutron-api/0.log" Feb 23 10:19:47 crc kubenswrapper[4940]: I0223 10:19:47.199982 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_09737df3-14f0-4f68-a683-5402bfcb0aab/nova-cell0-conductor-conductor/0.log" Feb 23 10:19:47 crc kubenswrapper[4940]: I0223 10:19:47.566687 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c854cfd6-7319-4a6c-8893-a96cd32bdcd0/nova-cell1-conductor-conductor/0.log" Feb 23 10:19:47 crc kubenswrapper[4940]: I0223 10:19:47.971644 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_1baa0ab5-14b9-4150-872e-e135857e3033/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 10:19:47 crc kubenswrapper[4940]: I0223 10:19:47.983879 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-657b46f66d-5snf5_14b5e353-0333-4351-a628-4767407854ec/keystone-api/0.log" Feb 23 10:19:48 crc kubenswrapper[4940]: I0223 10:19:48.331472 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-wj7m8_4528f4f4-45cd-415f-902e-d15ecef72b60/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:48 crc kubenswrapper[4940]: I0223 10:19:48.510178 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_38eb6728-c410-4f85-ac35-969880b14e26/nova-metadata-log/0.log" Feb 23 10:19:48 crc kubenswrapper[4940]: I0223 10:19:48.880572 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_10792692-8f84-43da-aea3-46d28e5ba1f5/nova-api-log/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.074608 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f228632e-c649-4cbf-9a32-5baad303ef28/mysql-bootstrap/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 
10:19:49.397292 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f228632e-c649-4cbf-9a32-5baad303ef28/galera/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.422035 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f228632e-c649-4cbf-9a32-5baad303ef28/mysql-bootstrap/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.433147 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_261aaecb-ec48-4d96-9579-35057b0d6394/nova-scheduler-scheduler/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.692648 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b7438a4-1302-46b5-a005-b74758200871/mysql-bootstrap/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.919723 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_10792692-8f84-43da-aea3-46d28e5ba1f5/nova-api-api/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.943173 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b7438a4-1302-46b5-a005-b74758200871/mysql-bootstrap/0.log" Feb 23 10:19:49 crc kubenswrapper[4940]: I0223 10:19:49.946653 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1b7438a4-1302-46b5-a005-b74758200871/galera/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.129095 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1a7ead03-cd14-44b3-967b-9daaf4070687/openstackclient/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.231922 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jl7wx_53a2f9a0-c632-432a-aebd-7f3c5863d0bc/openstack-network-exporter/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.453817 4940 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovsdb-server-init/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.553800 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovsdb-server-init/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.624604 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_38eb6728-c410-4f85-ac35-969880b14e26/nova-metadata-metadata/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.665275 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovsdb-server/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.685134 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-srtp4_67fcaa2c-2af4-49db-8193-de6e83317807/ovs-vswitchd/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.798745 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-skhdb_f4dfcca1-21ca-42ff-bed0-1bb4f8d14aaa/ovn-controller/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.939889 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ltqdk_d252356a-80f4-4cf3-b739-520d9bd4b2c1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:50 crc kubenswrapper[4940]: I0223 10:19:50.959603 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a/openstack-network-exporter/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.026234 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a6f61bb8-d6f5-464b-8ee4-fee2fce8c60a/ovn-northd/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.148199 4940 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_20aa8441-57d4-4190-8edb-609af4891496/openstack-network-exporter/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.175034 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_20aa8441-57d4-4190-8edb-609af4891496/ovsdbserver-nb/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.276192 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0e5b3c11-0f21-4277-b49b-15dc23cc9d96/openstack-network-exporter/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.382651 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0e5b3c11-0f21-4277-b49b-15dc23cc9d96/ovsdbserver-sb/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.716742 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-96958f474-956sq_e38493cb-6fde-4245-a5a4-99a91920708b/placement-api/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.827546 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-96958f474-956sq_e38493cb-6fde-4245-a5a4-99a91920708b/placement-log/0.log" Feb 23 10:19:51 crc kubenswrapper[4940]: I0223 10:19:51.853453 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_89409108-7455-4318-83ba-65a6dd96d76c/setup-container/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.060343 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_89409108-7455-4318-83ba-65a6dd96d76c/rabbitmq/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.090866 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_89409108-7455-4318-83ba-65a6dd96d76c/setup-container/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.110071 4940 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_438743de-ddf8-4b10-878a-b87c389cd3b6/setup-container/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.313195 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_438743de-ddf8-4b10-878a-b87c389cd3b6/setup-container/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.395902 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-l48h8_863d4b0a-6bc6-44a6-89d0-9167411a397d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.451594 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_438743de-ddf8-4b10-878a-b87c389cd3b6/rabbitmq/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.586568 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mh545_5fe83bad-242b-4933-9ff1-525359d29867/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.679593 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-wrv9l_cb29483b-9f50-4202-935a-0ff2e3e7d3ec/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.747912 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lgnzc_a25d2721-a065-4c7a-9d4c-61c3be28422e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:52 crc kubenswrapper[4940]: I0223 10:19:52.877412 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-w7thd_96cf1ebf-387e-417f-83eb-a360f951217e/ssh-known-hosts-edpm-deployment/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.079287 4940 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-6c475756fc-pxxbv_418704a3-dc2d-440f-8beb-2c00795cf4d4/proxy-server/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.103983 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c475756fc-pxxbv_418704a3-dc2d-440f-8beb-2c00795cf4d4/proxy-httpd/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.157021 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4jjdt_1b9efcfe-df2d-405e-9f10-d22dbce174e9/swift-ring-rebalance/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.304586 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-auditor/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.306311 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-reaper/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.419655 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-replicator/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.434849 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/account-server/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.545359 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-auditor/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.608344 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-replicator/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.618966 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-server/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.648123 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/container-updater/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.703801 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-auditor/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.772677 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-expirer/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.850914 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-replicator/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.877135 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-server/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.889743 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/rsync/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.892513 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/object-updater/0.log" Feb 23 10:19:53 crc kubenswrapper[4940]: I0223 10:19:53.980428 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b985bfb-fd7d-4c37-b935-26bc80e96fc0/swift-recon-cron/0.log" Feb 23 10:19:54 crc kubenswrapper[4940]: I0223 10:19:54.149707 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5zv49_16b77a40-fb67-4fe3-b4c8-d87dd4be9b25/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:19:54 crc kubenswrapper[4940]: I0223 10:19:54.236999 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c7cd2a10-7128-40ff-98b8-6d3026b08566/tempest-tests-tempest-tests-runner/0.log" Feb 23 10:19:54 crc kubenswrapper[4940]: I0223 10:19:54.351895 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_614008c1-1725-42e6-b6b3-407d9b909846/test-operator-logs-container/0.log" Feb 23 10:19:54 crc kubenswrapper[4940]: I0223 10:19:54.487162 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-l8x8b_f35a90d0-fb62-4a2f-9ae3-f7c8971a58e8/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 10:20:01 crc kubenswrapper[4940]: I0223 10:20:01.429140 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:20:01 crc kubenswrapper[4940]: I0223 10:20:01.429887 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:20:01 crc kubenswrapper[4940]: I0223 10:20:01.429942 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 10:20:01 crc kubenswrapper[4940]: I0223 10:20:01.430682 4940 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"546957238c5906c7fb8f7c88c8ab18ff9dc429daecaa694afa30833c82fdb34e"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:20:01 crc kubenswrapper[4940]: I0223 10:20:01.430740 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://546957238c5906c7fb8f7c88c8ab18ff9dc429daecaa694afa30833c82fdb34e" gracePeriod=600 Feb 23 10:20:02 crc kubenswrapper[4940]: I0223 10:20:02.042886 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="546957238c5906c7fb8f7c88c8ab18ff9dc429daecaa694afa30833c82fdb34e" exitCode=0 Feb 23 10:20:02 crc kubenswrapper[4940]: I0223 10:20:02.042938 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"546957238c5906c7fb8f7c88c8ab18ff9dc429daecaa694afa30833c82fdb34e"} Feb 23 10:20:02 crc kubenswrapper[4940]: I0223 10:20:02.043660 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c"} Feb 23 10:20:02 crc kubenswrapper[4940]: I0223 10:20:02.043685 4940 scope.go:117] "RemoveContainer" containerID="15348b9c177e6685e259fc78f404481f8793b0b450981c2e491d638571adaf4f" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.122818 4940 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-jnzpg"] Feb 23 10:20:10 crc kubenswrapper[4940]: E0223 10:20:10.124401 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51f07ce-77b2-4aa0-b521-63b587e319e6" containerName="container-00" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.124443 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51f07ce-77b2-4aa0-b521-63b587e319e6" containerName="container-00" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.127705 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51f07ce-77b2-4aa0-b521-63b587e319e6" containerName="container-00" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.157800 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.158940 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9w5\" (UniqueName: \"kubernetes.io/projected/4ceffbe3-49a3-4930-acb1-5c42e181e375-kube-api-access-qx9w5\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.159000 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-catalog-content\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.159091 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-utilities\") pod \"redhat-marketplace-jnzpg\" (UID: 
\"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.166766 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnzpg"] Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.261122 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx9w5\" (UniqueName: \"kubernetes.io/projected/4ceffbe3-49a3-4930-acb1-5c42e181e375-kube-api-access-qx9w5\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.261568 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-catalog-content\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.261669 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-utilities\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.262184 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-utilities\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.262484 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-catalog-content\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.282931 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx9w5\" (UniqueName: \"kubernetes.io/projected/4ceffbe3-49a3-4930-acb1-5c42e181e375-kube-api-access-qx9w5\") pod \"redhat-marketplace-jnzpg\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:10 crc kubenswrapper[4940]: I0223 10:20:10.481981 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:11 crc kubenswrapper[4940]: I0223 10:20:11.002194 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnzpg"] Feb 23 10:20:11 crc kubenswrapper[4940]: I0223 10:20:11.175566 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerStarted","Data":"8f736f6e17393e676daed8294c7fb1ac8a4a31f4861c839e573b2b30628d8ad8"} Feb 23 10:20:12 crc kubenswrapper[4940]: I0223 10:20:12.186503 4940 generic.go:334] "Generic (PLEG): container finished" podID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerID="fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660" exitCode=0 Feb 23 10:20:12 crc kubenswrapper[4940]: I0223 10:20:12.186594 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerDied","Data":"fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660"} Feb 23 10:20:13 crc kubenswrapper[4940]: I0223 10:20:13.197666 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerStarted","Data":"4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4"} Feb 23 10:20:14 crc kubenswrapper[4940]: I0223 10:20:14.210718 4940 generic.go:334] "Generic (PLEG): container finished" podID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerID="4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4" exitCode=0 Feb 23 10:20:14 crc kubenswrapper[4940]: I0223 10:20:14.210763 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerDied","Data":"4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4"} Feb 23 10:20:15 crc kubenswrapper[4940]: I0223 10:20:15.221970 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerStarted","Data":"580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c"} Feb 23 10:20:15 crc kubenswrapper[4940]: I0223 10:20:15.241468 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jnzpg" podStartSLOduration=2.830537125 podStartE2EDuration="5.241433752s" podCreationTimestamp="2026-02-23 10:20:10 +0000 UTC" firstStartedPulling="2026-02-23 10:20:12.188273227 +0000 UTC m=+5543.571479384" lastFinishedPulling="2026-02-23 10:20:14.599169854 +0000 UTC m=+5545.982376011" observedRunningTime="2026-02-23 10:20:15.238070776 +0000 UTC m=+5546.621276943" watchObservedRunningTime="2026-02-23 10:20:15.241433752 +0000 UTC m=+5546.624639909" Feb 23 10:20:20 crc kubenswrapper[4940]: I0223 10:20:20.482140 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:20 crc kubenswrapper[4940]: I0223 10:20:20.484403 4940 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:20 crc kubenswrapper[4940]: I0223 10:20:20.541658 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:20 crc kubenswrapper[4940]: I0223 10:20:20.861996 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-92fk4_0d68e7dc-1d8e-4edd-a2f9-585043e15a98/manager/0.log" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.089987 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/util/0.log" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.330138 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.387383 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/pull/0.log" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.387450 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/util/0.log" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.607567 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/pull/0.log" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.816109 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/pull/0.log" Feb 23 10:20:21 crc kubenswrapper[4940]: I0223 10:20:21.861463 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/util/0.log" Feb 23 10:20:22 crc kubenswrapper[4940]: I0223 10:20:22.060440 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f50df332aac1228d54a4d82de8c3e65bdeace5a18fcc566c39bd5668d3d5l6q_c924436a-929d-4ed7-ad05-6f9dea4ab38a/extract/0.log" Feb 23 10:20:22 crc kubenswrapper[4940]: I0223 10:20:22.292157 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnzpg"] Feb 23 10:20:22 crc kubenswrapper[4940]: I0223 10:20:22.469576 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-p857l_2a7c5730-7ed4-44b1-832d-109fa4460dc5/manager/0.log" Feb 23 10:20:22 crc kubenswrapper[4940]: I0223 10:20:22.579688 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-pvb4b_61343538-79c0-4565-ae70-a397b5fd6b2f/manager/0.log" Feb 23 10:20:22 crc kubenswrapper[4940]: I0223 10:20:22.937118 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-qzd5f_db71f743-426e-4fe8-ab74-17c3f68798fc/manager/0.log" Feb 23 10:20:23 crc kubenswrapper[4940]: I0223 10:20:23.248766 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-bqhhr_c8d94d12-5d54-4c60-85d4-de19e4dfde67/manager/0.log" Feb 23 10:20:23 crc kubenswrapper[4940]: I0223 10:20:23.544116 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-8wv98_34061626-0f45-4bb5-a16f-9059fa45be7f/manager/0.log" Feb 23 10:20:23 crc kubenswrapper[4940]: I0223 10:20:23.556980 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-86vf7_82d3766e-53e7-4dc8-9c9b-d71e9d930595/manager/0.log" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.035877 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-zqz6k_2fb7ee71-a9af-4504-8899-932449157080/manager/0.log" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.110886 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-vh4r6_780fe903-e160-47c9-9291-31ee2d139266/manager/0.log" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.306412 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jnzpg" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="registry-server" containerID="cri-o://580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c" gracePeriod=2 Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.372926 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rwvf9_e05a318b-495f-49c1-83cf-056d5ce99c8c/manager/0.log" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.585868 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-6nlcd_15249b0f-c437-4d93-b97a-c7e078139e07/manager/0.log" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.784987 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-6ztzk_d2c13199-d708-496b-b69a-43fba1068955/manager/0.log" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.841076 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.884780 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx9w5\" (UniqueName: \"kubernetes.io/projected/4ceffbe3-49a3-4930-acb1-5c42e181e375-kube-api-access-qx9w5\") pod \"4ceffbe3-49a3-4930-acb1-5c42e181e375\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.884841 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-utilities\") pod \"4ceffbe3-49a3-4930-acb1-5c42e181e375\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.884873 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-catalog-content\") pod \"4ceffbe3-49a3-4930-acb1-5c42e181e375\" (UID: \"4ceffbe3-49a3-4930-acb1-5c42e181e375\") " Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.885936 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-utilities" (OuterVolumeSpecName: "utilities") pod "4ceffbe3-49a3-4930-acb1-5c42e181e375" (UID: "4ceffbe3-49a3-4930-acb1-5c42e181e375"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.901305 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ceffbe3-49a3-4930-acb1-5c42e181e375" (UID: "4ceffbe3-49a3-4930-acb1-5c42e181e375"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.916499 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ceffbe3-49a3-4930-acb1-5c42e181e375-kube-api-access-qx9w5" (OuterVolumeSpecName: "kube-api-access-qx9w5") pod "4ceffbe3-49a3-4930-acb1-5c42e181e375" (UID: "4ceffbe3-49a3-4930-acb1-5c42e181e375"). InnerVolumeSpecName "kube-api-access-qx9w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.987519 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx9w5\" (UniqueName: \"kubernetes.io/projected/4ceffbe3-49a3-4930-acb1-5c42e181e375-kube-api-access-qx9w5\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.987556 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:24 crc kubenswrapper[4940]: I0223 10:20:24.987570 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ceffbe3-49a3-4930-acb1-5c42e181e375-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.163898 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cwwlzn_70ee3a78-ae3e-4b61-a6f6-4c3e36f50c3e/manager/0.log" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.324034 4940 generic.go:334] "Generic (PLEG): container finished" podID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerID="580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c" exitCode=0 Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.324108 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerDied","Data":"580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c"} Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.324142 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jnzpg" event={"ID":"4ceffbe3-49a3-4930-acb1-5c42e181e375","Type":"ContainerDied","Data":"8f736f6e17393e676daed8294c7fb1ac8a4a31f4861c839e573b2b30628d8ad8"} Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.324161 4940 scope.go:117] "RemoveContainer" containerID="580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.324209 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jnzpg" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.363897 4940 scope.go:117] "RemoveContainer" containerID="4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.382387 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnzpg"] Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.386655 4940 scope.go:117] "RemoveContainer" containerID="fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.401435 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jnzpg"] Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.441463 4940 scope.go:117] "RemoveContainer" containerID="580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c" Feb 23 10:20:25 crc kubenswrapper[4940]: E0223 10:20:25.441994 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c\": container with ID starting with 580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c not found: ID does not exist" containerID="580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.442031 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c"} err="failed to get container status \"580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c\": rpc error: code = NotFound desc = could not find container \"580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c\": container with ID starting with 580058e273977afe2ba230b3d0e525290ace33cbeb641ad74f28c79ff2dd025c not found: 
ID does not exist" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.442058 4940 scope.go:117] "RemoveContainer" containerID="4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4" Feb 23 10:20:25 crc kubenswrapper[4940]: E0223 10:20:25.442820 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4\": container with ID starting with 4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4 not found: ID does not exist" containerID="4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.442846 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4"} err="failed to get container status \"4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4\": rpc error: code = NotFound desc = could not find container \"4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4\": container with ID starting with 4a763c1afa0593e38c3b419f5ef298eff6384a4f4b5499d107173648612344c4 not found: ID does not exist" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.442865 4940 scope.go:117] "RemoveContainer" containerID="fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660" Feb 23 10:20:25 crc kubenswrapper[4940]: E0223 10:20:25.445323 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660\": container with ID starting with fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660 not found: ID does not exist" containerID="fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.445375 4940 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660"} err="failed to get container status \"fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660\": rpc error: code = NotFound desc = could not find container \"fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660\": container with ID starting with fbda907333131f0c575285773b1d37a4684b81921c5d00ce307be7c4e1ac0660 not found: ID does not exist" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.800975 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68c97fd8b-ls267_bd17ff9f-8fa9-4ccc-972b-3cbb2abbdf00/operator/0.log" Feb 23 10:20:25 crc kubenswrapper[4940]: I0223 10:20:25.884294 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fkkwt_808d7f68-dc41-4211-b785-00e0157483b1/registry-server/0.log" Feb 23 10:20:26 crc kubenswrapper[4940]: I0223 10:20:26.237579 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-58p99_e810e429-c05d-4451-a863-196e8e071d9b/manager/0.log" Feb 23 10:20:26 crc kubenswrapper[4940]: I0223 10:20:26.449355 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-zb9xm_bda50d0f-3559-47b6-9ee2-8104750b30c4/manager/0.log" Feb 23 10:20:26 crc kubenswrapper[4940]: I0223 10:20:26.687165 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gk729_69a079c2-ac60-4b97-ae60-25c8189e6816/operator/0.log" Feb 23 10:20:26 crc kubenswrapper[4940]: I0223 10:20:26.915297 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-qzv55_8d39a603-93c8-4c09-a1d2-97e6c14902fe/manager/0.log" Feb 23 
10:20:27 crc kubenswrapper[4940]: I0223 10:20:27.179118 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-khtmd_d2fb7a6a-317d-4180-bcc3-07087b8a48ba/manager/0.log" Feb 23 10:20:27 crc kubenswrapper[4940]: I0223 10:20:27.366456 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" path="/var/lib/kubelet/pods/4ceffbe3-49a3-4930-acb1-5c42e181e375/volumes" Feb 23 10:20:27 crc kubenswrapper[4940]: I0223 10:20:27.385017 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-phggr_c81581e5-15a7-4b56-9b22-ecfd026749bc/manager/0.log" Feb 23 10:20:27 crc kubenswrapper[4940]: I0223 10:20:27.616946 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-554b4c57dc-7gq48_ce0016e4-e6c7-4ac5-8b5e-bd9edfa9c1b8/manager/0.log" Feb 23 10:20:27 crc kubenswrapper[4940]: I0223 10:20:27.758934 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-s2vxb_c6e874c6-520a-40fa-b182-e7a0daab54c7/manager/0.log" Feb 23 10:20:27 crc kubenswrapper[4940]: I0223 10:20:27.816257 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-cmbf8_32fc4d76-59e1-44b3-ace9-e9f14dc4f86a/manager/0.log" Feb 23 10:20:32 crc kubenswrapper[4940]: I0223 10:20:32.115180 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-vp5zb_dfc9a681-c309-4803-9be0-6150d615b023/manager/0.log" Feb 23 10:20:50 crc kubenswrapper[4940]: I0223 10:20:50.411564 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-687p7_f5e834a4-b3e0-4d34-922a-2a8ed8e1fecb/control-plane-machine-set-operator/0.log" Feb 23 10:20:50 crc kubenswrapper[4940]: I0223 10:20:50.592804 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-95wjd_4de72bcc-6d41-47cc-b9f7-f4cca10b977f/kube-rbac-proxy/0.log" Feb 23 10:20:50 crc kubenswrapper[4940]: I0223 10:20:50.601709 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-95wjd_4de72bcc-6d41-47cc-b9f7-f4cca10b977f/machine-api-operator/0.log" Feb 23 10:21:04 crc kubenswrapper[4940]: I0223 10:21:04.365758 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-csxvp_ea6d2e05-15f3-4d73-b9e7-d22652f685ff/cert-manager-controller/0.log" Feb 23 10:21:04 crc kubenswrapper[4940]: I0223 10:21:04.572037 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-ls9d2_ff0dd0c0-0c3d-4373-bf64-bfbfbda693d3/cert-manager-cainjector/0.log" Feb 23 10:21:04 crc kubenswrapper[4940]: I0223 10:21:04.641124 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wv7r6_33328c1e-cfb4-435b-a5a0-8b1ec675055a/cert-manager-webhook/0.log" Feb 23 10:21:19 crc kubenswrapper[4940]: I0223 10:21:19.888296 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-w29cj_e77fac6b-039a-43b2-ad12-f5e506201ef7/nmstate-console-plugin/0.log" Feb 23 10:21:20 crc kubenswrapper[4940]: I0223 10:21:20.314723 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-frr6p_a28be9f7-f2d0-4349-8432-a33d0f04d076/nmstate-handler/0.log" Feb 23 10:21:20 crc kubenswrapper[4940]: I0223 10:21:20.405381 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-kcszk_06a06080-4162-423f-bd67-2cdc3aa6cec0/kube-rbac-proxy/0.log" Feb 23 10:21:20 crc kubenswrapper[4940]: I0223 10:21:20.512522 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-kcszk_06a06080-4162-423f-bd67-2cdc3aa6cec0/nmstate-metrics/0.log" Feb 23 10:21:20 crc kubenswrapper[4940]: I0223 10:21:20.540911 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2lctm_6a03ba2d-040d-4fe6-ac2f-081bb22e1f38/nmstate-operator/0.log" Feb 23 10:21:20 crc kubenswrapper[4940]: I0223 10:21:20.713106 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-btmp6_f78f27f8-0a49-4aef-9e58-0cdb19fddbe9/nmstate-webhook/0.log" Feb 23 10:21:54 crc kubenswrapper[4940]: I0223 10:21:54.543449 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5cz68_e1d5ae18-3a8e-4845-a163-827184c53429/kube-rbac-proxy/0.log" Feb 23 10:21:54 crc kubenswrapper[4940]: I0223 10:21:54.634322 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-5cz68_e1d5ae18-3a8e-4845-a163-827184c53429/controller/0.log" Feb 23 10:21:54 crc kubenswrapper[4940]: I0223 10:21:54.780852 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:21:54 crc kubenswrapper[4940]: I0223 10:21:54.983447 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:21:54 crc kubenswrapper[4940]: I0223 10:21:54.986979 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 
10:21:55.030977 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.033263 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.220961 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.283112 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.309149 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.342285 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.529128 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-reloader/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.532421 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-frr-files/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.537457 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/controller/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.546714 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/cp-metrics/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.742068 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/kube-rbac-proxy/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.766575 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/kube-rbac-proxy-frr/0.log" Feb 23 10:21:55 crc kubenswrapper[4940]: I0223 10:21:55.800203 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/frr-metrics/0.log" Feb 23 10:21:56 crc kubenswrapper[4940]: I0223 10:21:56.032065 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-crdxs_130d1750-19ea-4753-87f5-1e7f85169a40/frr-k8s-webhook-server/0.log" Feb 23 10:21:56 crc kubenswrapper[4940]: I0223 10:21:56.043332 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/reloader/0.log" Feb 23 10:21:56 crc kubenswrapper[4940]: I0223 10:21:56.338862 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6fbfbdcfc7-6tv8l_19abcf46-c53b-4409-a6f9-e7e8b41e3182/manager/0.log" Feb 23 10:21:56 crc kubenswrapper[4940]: I0223 10:21:56.513006 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-d595fc4b7-pnf6s_462005ef-96eb-4734-9ffe-eec88929e4d2/webhook-server/0.log" Feb 23 10:21:56 crc kubenswrapper[4940]: I0223 10:21:56.634282 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vw24x_2309dc31-3802-4155-847b-56d77574cee0/kube-rbac-proxy/0.log" Feb 23 10:21:57 crc kubenswrapper[4940]: I0223 10:21:57.337788 4940 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vw24x_2309dc31-3802-4155-847b-56d77574cee0/speaker/0.log" Feb 23 10:21:57 crc kubenswrapper[4940]: I0223 10:21:57.911096 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-vj8xk_bbb0e30c-ec14-4878-922f-df5bdaa26e76/frr/0.log" Feb 23 10:22:01 crc kubenswrapper[4940]: I0223 10:22:01.429868 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:22:01 crc kubenswrapper[4940]: I0223 10:22:01.430554 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.118158 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/util/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.364408 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/util/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.394365 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/pull/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.396795 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/pull/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.683779 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/pull/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.698461 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/extract/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.710936 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213ccb95_36e9d5fb-4709-4cb8-ac88-67a510ca10fe/util/0.log" Feb 23 10:22:12 crc kubenswrapper[4940]: I0223 10:22:12.887885 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/extract-utilities/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.090435 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/extract-utilities/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.098998 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/extract-content/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.131107 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/extract-content/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.351919 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/extract-content/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.393316 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/extract-utilities/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.485116 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nkx7p_c2f95df1-67e7-47f8-aa50-91cbcdd1036d/registry-server/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.659092 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-utilities/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.854118 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-content/0.log" Feb 23 10:22:13 crc kubenswrapper[4940]: I0223 10:22:13.939809 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-utilities/0.log" Feb 23 10:22:14 crc kubenswrapper[4940]: I0223 10:22:14.138273 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-content/0.log" Feb 23 10:22:14 crc kubenswrapper[4940]: I0223 10:22:14.492672 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-utilities/0.log" Feb 23 10:22:14 crc kubenswrapper[4940]: I0223 10:22:14.500407 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/extract-content/0.log" Feb 23 10:22:14 crc kubenswrapper[4940]: I0223 10:22:14.733037 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/util/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.014008 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/pull/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.053411 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/util/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.119846 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/pull/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.304854 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nmz5s_f93c4964-18cf-48c3-b3b1-dc7107d8542a/registry-server/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.366134 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/pull/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.381374 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/extract/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 
10:22:15.386176 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca4pb6q_3db796ec-3e41-4deb-abb8-e60eb37a659a/util/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.541000 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hf78k_4e776654-5212-41ae-ac30-a4dafdf7a349/marketplace-operator/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.617818 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-utilities/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.821012 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-utilities/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.846874 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-content/0.log" Feb 23 10:22:15 crc kubenswrapper[4940]: I0223 10:22:15.856983 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-content/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.079605 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-content/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.119862 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/extract-utilities/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.261413 4940 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6t7mh_b812b371-b4f8-439d-8a46-152ba8e9b7bf/registry-server/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.309191 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-utilities/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.525877 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-utilities/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.529229 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-content/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.534896 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-content/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.728657 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-utilities/0.log" Feb 23 10:22:16 crc kubenswrapper[4940]: I0223 10:22:16.746582 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/extract-content/0.log" Feb 23 10:22:17 crc kubenswrapper[4940]: I0223 10:22:17.476500 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qmdzn_612eb23a-7ea4-4c79-bcfe-a627918a7e3f/registry-server/0.log" Feb 23 10:22:31 crc kubenswrapper[4940]: I0223 10:22:31.429201 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:22:31 crc kubenswrapper[4940]: I0223 10:22:31.429699 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.113929 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dh6n"] Feb 23 10:22:50 crc kubenswrapper[4940]: E0223 10:22:50.115114 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="extract-content" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.115140 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="extract-content" Feb 23 10:22:50 crc kubenswrapper[4940]: E0223 10:22:50.115177 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="registry-server" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.115185 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="registry-server" Feb 23 10:22:50 crc kubenswrapper[4940]: E0223 10:22:50.115207 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="extract-utilities" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.115215 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="extract-utilities" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.115485 4940 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4ceffbe3-49a3-4930-acb1-5c42e181e375" containerName="registry-server" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.117511 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.130309 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dh6n"] Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.280727 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-utilities\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.281053 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-catalog-content\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.281175 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pnm\" (UniqueName: \"kubernetes.io/projected/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-kube-api-access-76pnm\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.382540 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76pnm\" (UniqueName: \"kubernetes.io/projected/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-kube-api-access-76pnm\") pod \"community-operators-2dh6n\" 
(UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.382657 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-utilities\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.382699 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-catalog-content\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.383219 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-utilities\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.384059 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-catalog-content\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.415059 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pnm\" (UniqueName: \"kubernetes.io/projected/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-kube-api-access-76pnm\") pod \"community-operators-2dh6n\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " 
pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:50 crc kubenswrapper[4940]: I0223 10:22:50.455761 4940 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:22:51 crc kubenswrapper[4940]: I0223 10:22:51.114534 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dh6n"] Feb 23 10:22:51 crc kubenswrapper[4940]: I0223 10:22:51.723544 4940 generic.go:334] "Generic (PLEG): container finished" podID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerID="2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70" exitCode=0 Feb 23 10:22:51 crc kubenswrapper[4940]: I0223 10:22:51.723736 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerDied","Data":"2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70"} Feb 23 10:22:51 crc kubenswrapper[4940]: I0223 10:22:51.724175 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerStarted","Data":"abc0f73f78ff709b8d1ee1e9874f2aa60f628fb6199ec07afaeef1ae025bf7f9"} Feb 23 10:22:52 crc kubenswrapper[4940]: I0223 10:22:52.734738 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerStarted","Data":"c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5"} Feb 23 10:22:54 crc kubenswrapper[4940]: I0223 10:22:54.771368 4940 generic.go:334] "Generic (PLEG): container finished" podID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerID="c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5" exitCode=0 Feb 23 10:22:54 crc kubenswrapper[4940]: I0223 10:22:54.771448 4940 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerDied","Data":"c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5"} Feb 23 10:22:55 crc kubenswrapper[4940]: I0223 10:22:55.784168 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerStarted","Data":"868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1"} Feb 23 10:23:00 crc kubenswrapper[4940]: I0223 10:23:00.457802 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:23:00 crc kubenswrapper[4940]: I0223 10:23:00.458134 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:23:00 crc kubenswrapper[4940]: I0223 10:23:00.502544 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:23:00 crc kubenswrapper[4940]: I0223 10:23:00.531161 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dh6n" podStartSLOduration=7.077729947 podStartE2EDuration="10.531096894s" podCreationTimestamp="2026-02-23 10:22:50 +0000 UTC" firstStartedPulling="2026-02-23 10:22:51.728092564 +0000 UTC m=+5703.111298721" lastFinishedPulling="2026-02-23 10:22:55.181459511 +0000 UTC m=+5706.564665668" observedRunningTime="2026-02-23 10:22:55.808105294 +0000 UTC m=+5707.191311451" watchObservedRunningTime="2026-02-23 10:23:00.531096894 +0000 UTC m=+5711.914303051" Feb 23 10:23:00 crc kubenswrapper[4940]: I0223 10:23:00.903541 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:23:00 crc kubenswrapper[4940]: I0223 
10:23:00.954544 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dh6n"] Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.429419 4940 patch_prober.go:28] interesting pod/machine-config-daemon-26mgs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.429479 4940 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.429526 4940 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.430370 4940 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c"} pod="openshift-machine-config-operator/machine-config-daemon-26mgs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.430469 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerName="machine-config-daemon" containerID="cri-o://034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" gracePeriod=600 Feb 23 10:23:01 crc kubenswrapper[4940]: E0223 10:23:01.579598 4940 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.880134 4940 generic.go:334] "Generic (PLEG): container finished" podID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" exitCode=0 Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.880223 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerDied","Data":"034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c"} Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.880298 4940 scope.go:117] "RemoveContainer" containerID="546957238c5906c7fb8f7c88c8ab18ff9dc429daecaa694afa30833c82fdb34e" Feb 23 10:23:01 crc kubenswrapper[4940]: I0223 10:23:01.884594 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:23:01 crc kubenswrapper[4940]: E0223 10:23:01.885826 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:23:02 crc kubenswrapper[4940]: I0223 10:23:02.888734 4940 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-2dh6n" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="registry-server" containerID="cri-o://868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1" gracePeriod=2 Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.392382 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.431340 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-catalog-content\") pod \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.431686 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-utilities\") pod \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.431822 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76pnm\" (UniqueName: \"kubernetes.io/projected/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-kube-api-access-76pnm\") pod \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\" (UID: \"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0\") " Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.436444 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-utilities" (OuterVolumeSpecName: "utilities") pod "4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" (UID: "4090ad1a-0873-4c7e-85ef-e6dda3bef8c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.444840 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-kube-api-access-76pnm" (OuterVolumeSpecName: "kube-api-access-76pnm") pod "4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" (UID: "4090ad1a-0873-4c7e-85ef-e6dda3bef8c0"). InnerVolumeSpecName "kube-api-access-76pnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.503991 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" (UID: "4090ad1a-0873-4c7e-85ef-e6dda3bef8c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.534211 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.534441 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.534505 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76pnm\" (UniqueName: \"kubernetes.io/projected/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0-kube-api-access-76pnm\") on node \"crc\" DevicePath \"\"" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.908201 4940 generic.go:334] "Generic (PLEG): container finished" podID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" 
containerID="868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1" exitCode=0 Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.908299 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerDied","Data":"868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1"} Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.908339 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dh6n" event={"ID":"4090ad1a-0873-4c7e-85ef-e6dda3bef8c0","Type":"ContainerDied","Data":"abc0f73f78ff709b8d1ee1e9874f2aa60f628fb6199ec07afaeef1ae025bf7f9"} Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.908361 4940 scope.go:117] "RemoveContainer" containerID="868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.908400 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dh6n" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.943254 4940 scope.go:117] "RemoveContainer" containerID="c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5" Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.968332 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dh6n"] Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.978402 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dh6n"] Feb 23 10:23:03 crc kubenswrapper[4940]: I0223 10:23:03.985228 4940 scope.go:117] "RemoveContainer" containerID="2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70" Feb 23 10:23:04 crc kubenswrapper[4940]: I0223 10:23:04.018786 4940 scope.go:117] "RemoveContainer" containerID="868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1" Feb 23 10:23:04 crc kubenswrapper[4940]: E0223 10:23:04.019238 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1\": container with ID starting with 868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1 not found: ID does not exist" containerID="868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1" Feb 23 10:23:04 crc kubenswrapper[4940]: I0223 10:23:04.019282 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1"} err="failed to get container status \"868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1\": rpc error: code = NotFound desc = could not find container \"868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1\": container with ID starting with 868b05f17cf5fc8ef967437b62cd301b24025947d4c0c744bc746b309a9815d1 not 
found: ID does not exist" Feb 23 10:23:04 crc kubenswrapper[4940]: I0223 10:23:04.019303 4940 scope.go:117] "RemoveContainer" containerID="c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5" Feb 23 10:23:04 crc kubenswrapper[4940]: E0223 10:23:04.019787 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5\": container with ID starting with c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5 not found: ID does not exist" containerID="c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5" Feb 23 10:23:04 crc kubenswrapper[4940]: I0223 10:23:04.019856 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5"} err="failed to get container status \"c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5\": rpc error: code = NotFound desc = could not find container \"c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5\": container with ID starting with c53431f264f3ca370b78885ebe7f0a70fae544f5a562bc5b5f184911f6aca7b5 not found: ID does not exist" Feb 23 10:23:04 crc kubenswrapper[4940]: I0223 10:23:04.019901 4940 scope.go:117] "RemoveContainer" containerID="2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70" Feb 23 10:23:04 crc kubenswrapper[4940]: E0223 10:23:04.020591 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70\": container with ID starting with 2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70 not found: ID does not exist" containerID="2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70" Feb 23 10:23:04 crc kubenswrapper[4940]: I0223 10:23:04.020636 4940 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70"} err="failed to get container status \"2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70\": rpc error: code = NotFound desc = could not find container \"2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70\": container with ID starting with 2137646c96a6d5bbdb38906a383d78d9864a025a523ce3c57b36cff31e456a70 not found: ID does not exist" Feb 23 10:23:05 crc kubenswrapper[4940]: I0223 10:23:05.356049 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" path="/var/lib/kubelet/pods/4090ad1a-0873-4c7e-85ef-e6dda3bef8c0/volumes" Feb 23 10:23:17 crc kubenswrapper[4940]: I0223 10:23:17.346249 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:23:17 crc kubenswrapper[4940]: E0223 10:23:17.348222 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:23:30 crc kubenswrapper[4940]: I0223 10:23:30.345785 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:23:30 crc kubenswrapper[4940]: E0223 10:23:30.346645 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:23:41 crc kubenswrapper[4940]: I0223 10:23:41.346501 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:23:41 crc kubenswrapper[4940]: E0223 10:23:41.347590 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:23:53 crc kubenswrapper[4940]: I0223 10:23:53.346584 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:23:53 crc kubenswrapper[4940]: E0223 10:23:53.348045 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:24:04 crc kubenswrapper[4940]: I0223 10:24:04.347450 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:24:04 crc kubenswrapper[4940]: E0223 10:24:04.348578 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:24:15 crc kubenswrapper[4940]: I0223 10:24:15.346312 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:24:15 crc kubenswrapper[4940]: E0223 10:24:15.348723 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:24:26 crc kubenswrapper[4940]: I0223 10:24:26.346315 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:24:26 crc kubenswrapper[4940]: E0223 10:24:26.348047 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:24:37 crc kubenswrapper[4940]: I0223 10:24:37.797570 4940 scope.go:117] "RemoveContainer" containerID="4839673769c8a99599d4f7289f855357137e6dd69fac5fe55637db30c9d2d1e6" Feb 23 10:24:39 crc kubenswrapper[4940]: I0223 10:24:39.351798 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:24:39 crc kubenswrapper[4940]: 
E0223 10:24:39.352311 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:24:39 crc kubenswrapper[4940]: I0223 10:24:39.972108 4940 generic.go:334] "Generic (PLEG): container finished" podID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerID="73d7466bbc598593251e71a094b4762d0b3dc311e1f807f8766ed7bb8ca0c3fe" exitCode=0 Feb 23 10:24:39 crc kubenswrapper[4940]: I0223 10:24:39.972771 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" event={"ID":"d5a55261-dc38-4ed1-88be-1552ca0e32eb","Type":"ContainerDied","Data":"73d7466bbc598593251e71a094b4762d0b3dc311e1f807f8766ed7bb8ca0c3fe"} Feb 23 10:24:39 crc kubenswrapper[4940]: I0223 10:24:39.973972 4940 scope.go:117] "RemoveContainer" containerID="73d7466bbc598593251e71a094b4762d0b3dc311e1f807f8766ed7bb8ca0c3fe" Feb 23 10:24:40 crc kubenswrapper[4940]: I0223 10:24:40.668651 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkzq4_must-gather-lc2d4_d5a55261-dc38-4ed1-88be-1552ca0e32eb/gather/0.log" Feb 23 10:24:53 crc kubenswrapper[4940]: I0223 10:24:53.345578 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:24:53 crc kubenswrapper[4940]: E0223 10:24:53.346437 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:24:53 crc kubenswrapper[4940]: I0223 10:24:53.927481 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkzq4/must-gather-lc2d4"] Feb 23 10:24:53 crc kubenswrapper[4940]: I0223 10:24:53.928168 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="copy" containerID="cri-o://a5f5bbec0f2c57e5fef78806c16d0443ebbe1638063dcd969867bf125f2216ce" gracePeriod=2 Feb 23 10:24:53 crc kubenswrapper[4940]: I0223 10:24:53.935648 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkzq4/must-gather-lc2d4"] Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.122792 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkzq4_must-gather-lc2d4_d5a55261-dc38-4ed1-88be-1552ca0e32eb/copy/0.log" Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.123325 4940 generic.go:334] "Generic (PLEG): container finished" podID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerID="a5f5bbec0f2c57e5fef78806c16d0443ebbe1638063dcd969867bf125f2216ce" exitCode=143 Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.377956 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkzq4_must-gather-lc2d4_d5a55261-dc38-4ed1-88be-1552ca0e32eb/copy/0.log" Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.378697 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.487713 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpdj6\" (UniqueName: \"kubernetes.io/projected/d5a55261-dc38-4ed1-88be-1552ca0e32eb-kube-api-access-qpdj6\") pod \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.487796 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5a55261-dc38-4ed1-88be-1552ca0e32eb-must-gather-output\") pod \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\" (UID: \"d5a55261-dc38-4ed1-88be-1552ca0e32eb\") " Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.493463 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a55261-dc38-4ed1-88be-1552ca0e32eb-kube-api-access-qpdj6" (OuterVolumeSpecName: "kube-api-access-qpdj6") pod "d5a55261-dc38-4ed1-88be-1552ca0e32eb" (UID: "d5a55261-dc38-4ed1-88be-1552ca0e32eb"). InnerVolumeSpecName "kube-api-access-qpdj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.590003 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpdj6\" (UniqueName: \"kubernetes.io/projected/d5a55261-dc38-4ed1-88be-1552ca0e32eb-kube-api-access-qpdj6\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.677956 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a55261-dc38-4ed1-88be-1552ca0e32eb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5a55261-dc38-4ed1-88be-1552ca0e32eb" (UID: "d5a55261-dc38-4ed1-88be-1552ca0e32eb"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:24:54 crc kubenswrapper[4940]: I0223 10:24:54.692094 4940 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5a55261-dc38-4ed1-88be-1552ca0e32eb-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 10:24:55 crc kubenswrapper[4940]: I0223 10:24:55.137054 4940 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkzq4_must-gather-lc2d4_d5a55261-dc38-4ed1-88be-1552ca0e32eb/copy/0.log" Feb 23 10:24:55 crc kubenswrapper[4940]: I0223 10:24:55.137876 4940 scope.go:117] "RemoveContainer" containerID="a5f5bbec0f2c57e5fef78806c16d0443ebbe1638063dcd969867bf125f2216ce" Feb 23 10:24:55 crc kubenswrapper[4940]: I0223 10:24:55.138058 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkzq4/must-gather-lc2d4" Feb 23 10:24:55 crc kubenswrapper[4940]: I0223 10:24:55.162234 4940 scope.go:117] "RemoveContainer" containerID="73d7466bbc598593251e71a094b4762d0b3dc311e1f807f8766ed7bb8ca0c3fe" Feb 23 10:24:55 crc kubenswrapper[4940]: I0223 10:24:55.355126 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" path="/var/lib/kubelet/pods/d5a55261-dc38-4ed1-88be-1552ca0e32eb/volumes" Feb 23 10:25:06 crc kubenswrapper[4940]: I0223 10:25:06.346977 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:25:06 crc kubenswrapper[4940]: E0223 10:25:06.347760 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" 
podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:25:20 crc kubenswrapper[4940]: I0223 10:25:20.346143 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:25:20 crc kubenswrapper[4940]: E0223 10:25:20.346950 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:25:35 crc kubenswrapper[4940]: I0223 10:25:35.345701 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:25:35 crc kubenswrapper[4940]: E0223 10:25:35.346306 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:25:37 crc kubenswrapper[4940]: I0223 10:25:37.855480 4940 scope.go:117] "RemoveContainer" containerID="0e2706b8d43476e2fe65a82f4c6a7a555a0ab7f7a987ad12004d7c679b8ebf88" Feb 23 10:25:47 crc kubenswrapper[4940]: I0223 10:25:47.346844 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:25:47 crc kubenswrapper[4940]: E0223 10:25:47.350379 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:26:01 crc kubenswrapper[4940]: I0223 10:26:01.345496 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:26:01 crc kubenswrapper[4940]: E0223 10:26:01.347978 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:26:13 crc kubenswrapper[4940]: I0223 10:26:13.347184 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:26:13 crc kubenswrapper[4940]: E0223 10:26:13.348567 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:26:28 crc kubenswrapper[4940]: I0223 10:26:28.346133 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:26:28 crc kubenswrapper[4940]: E0223 10:26:28.348669 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:26:42 crc kubenswrapper[4940]: I0223 10:26:42.346308 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:26:42 crc kubenswrapper[4940]: E0223 10:26:42.347603 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:26:54 crc kubenswrapper[4940]: I0223 10:26:54.345384 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:26:54 crc kubenswrapper[4940]: E0223 10:26:54.346268 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:27:08 crc kubenswrapper[4940]: I0223 10:27:08.345577 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:27:08 crc kubenswrapper[4940]: E0223 10:27:08.347646 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:27:23 crc kubenswrapper[4940]: I0223 10:27:23.347573 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:27:23 crc kubenswrapper[4940]: E0223 10:27:23.348549 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.511862 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ztrdx"] Feb 23 10:27:33 crc kubenswrapper[4940]: E0223 10:27:33.513511 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="extract-content" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513527 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="extract-content" Feb 23 10:27:33 crc kubenswrapper[4940]: E0223 10:27:33.513540 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="registry-server" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513546 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="registry-server" Feb 23 10:27:33 crc kubenswrapper[4940]: E0223 10:27:33.513569 4940 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="gather" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513576 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="gather" Feb 23 10:27:33 crc kubenswrapper[4940]: E0223 10:27:33.513588 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="extract-utilities" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513596 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="extract-utilities" Feb 23 10:27:33 crc kubenswrapper[4940]: E0223 10:27:33.513636 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="copy" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513643 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="copy" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513838 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="gather" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513855 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="4090ad1a-0873-4c7e-85ef-e6dda3bef8c0" containerName="registry-server" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.513869 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a55261-dc38-4ed1-88be-1552ca0e32eb" containerName="copy" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.515256 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.550848 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztrdx"] Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.579035 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-utilities\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.579090 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fphnm\" (UniqueName: \"kubernetes.io/projected/18ff4e86-435b-4ef9-b333-5e373e67a227-kube-api-access-fphnm\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.579124 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-catalog-content\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.681218 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-utilities\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.681288 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fphnm\" (UniqueName: \"kubernetes.io/projected/18ff4e86-435b-4ef9-b333-5e373e67a227-kube-api-access-fphnm\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.681327 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-catalog-content\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.681889 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-catalog-content\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.682101 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-utilities\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.716570 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fphnm\" (UniqueName: \"kubernetes.io/projected/18ff4e86-435b-4ef9-b333-5e373e67a227-kube-api-access-fphnm\") pod \"redhat-operators-ztrdx\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:33 crc kubenswrapper[4940]: I0223 10:27:33.848144 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:34 crc kubenswrapper[4940]: I0223 10:27:34.350216 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztrdx"] Feb 23 10:27:34 crc kubenswrapper[4940]: I0223 10:27:34.829129 4940 generic.go:334] "Generic (PLEG): container finished" podID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerID="7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e" exitCode=0 Feb 23 10:27:34 crc kubenswrapper[4940]: I0223 10:27:34.829491 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerDied","Data":"7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e"} Feb 23 10:27:34 crc kubenswrapper[4940]: I0223 10:27:34.829529 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerStarted","Data":"29de090c7897226c6297aa0ab4dfa0e811c48dd41dcebcee20aa083372e2d512"} Feb 23 10:27:34 crc kubenswrapper[4940]: I0223 10:27:34.832106 4940 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 10:27:35 crc kubenswrapper[4940]: I0223 10:27:35.841668 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerStarted","Data":"fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1"} Feb 23 10:27:36 crc kubenswrapper[4940]: I0223 10:27:36.345198 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:27:36 crc kubenswrapper[4940]: E0223 10:27:36.345652 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:27:39 crc kubenswrapper[4940]: I0223 10:27:39.884233 4940 generic.go:334] "Generic (PLEG): container finished" podID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerID="fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1" exitCode=0 Feb 23 10:27:39 crc kubenswrapper[4940]: I0223 10:27:39.884323 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerDied","Data":"fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1"} Feb 23 10:27:40 crc kubenswrapper[4940]: I0223 10:27:40.895009 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerStarted","Data":"b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8"} Feb 23 10:27:40 crc kubenswrapper[4940]: I0223 10:27:40.923648 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ztrdx" podStartSLOduration=2.485901532 podStartE2EDuration="7.923584572s" podCreationTimestamp="2026-02-23 10:27:33 +0000 UTC" firstStartedPulling="2026-02-23 10:27:34.831771489 +0000 UTC m=+5986.214977646" lastFinishedPulling="2026-02-23 10:27:40.269454529 +0000 UTC m=+5991.652660686" observedRunningTime="2026-02-23 10:27:40.913937798 +0000 UTC m=+5992.297143985" watchObservedRunningTime="2026-02-23 10:27:40.923584572 +0000 UTC m=+5992.306790739" Feb 23 10:27:43 crc kubenswrapper[4940]: I0223 10:27:43.849171 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:43 crc kubenswrapper[4940]: I0223 10:27:43.849684 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:44 crc kubenswrapper[4940]: I0223 10:27:44.896299 4940 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ztrdx" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="registry-server" probeResult="failure" output=< Feb 23 10:27:44 crc kubenswrapper[4940]: timeout: failed to connect service ":50051" within 1s Feb 23 10:27:44 crc kubenswrapper[4940]: > Feb 23 10:27:49 crc kubenswrapper[4940]: I0223 10:27:49.354454 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:27:49 crc kubenswrapper[4940]: E0223 10:27:49.355104 4940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-26mgs_openshift-machine-config-operator(f3f2cfd6-5ddf-436d-998f-440f1cc642b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" podUID="f3f2cfd6-5ddf-436d-998f-440f1cc642b1" Feb 23 10:27:53 crc kubenswrapper[4940]: I0223 10:27:53.915605 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:53 crc kubenswrapper[4940]: I0223 10:27:53.983294 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:54 crc kubenswrapper[4940]: I0223 10:27:54.164196 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztrdx"] Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.024520 4940 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-ztrdx" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="registry-server" containerID="cri-o://b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8" gracePeriod=2 Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.497054 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.684239 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-catalog-content\") pod \"18ff4e86-435b-4ef9-b333-5e373e67a227\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.684383 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fphnm\" (UniqueName: \"kubernetes.io/projected/18ff4e86-435b-4ef9-b333-5e373e67a227-kube-api-access-fphnm\") pod \"18ff4e86-435b-4ef9-b333-5e373e67a227\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.684578 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-utilities\") pod \"18ff4e86-435b-4ef9-b333-5e373e67a227\" (UID: \"18ff4e86-435b-4ef9-b333-5e373e67a227\") " Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.685204 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-utilities" (OuterVolumeSpecName: "utilities") pod "18ff4e86-435b-4ef9-b333-5e373e67a227" (UID: "18ff4e86-435b-4ef9-b333-5e373e67a227"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.685634 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.697563 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ff4e86-435b-4ef9-b333-5e373e67a227-kube-api-access-fphnm" (OuterVolumeSpecName: "kube-api-access-fphnm") pod "18ff4e86-435b-4ef9-b333-5e373e67a227" (UID: "18ff4e86-435b-4ef9-b333-5e373e67a227"). InnerVolumeSpecName "kube-api-access-fphnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.787951 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fphnm\" (UniqueName: \"kubernetes.io/projected/18ff4e86-435b-4ef9-b333-5e373e67a227-kube-api-access-fphnm\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.809570 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18ff4e86-435b-4ef9-b333-5e373e67a227" (UID: "18ff4e86-435b-4ef9-b333-5e373e67a227"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:27:55 crc kubenswrapper[4940]: I0223 10:27:55.890305 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18ff4e86-435b-4ef9-b333-5e373e67a227-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.035876 4940 generic.go:334] "Generic (PLEG): container finished" podID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerID="b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8" exitCode=0 Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.035924 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerDied","Data":"b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8"} Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.035963 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztrdx" event={"ID":"18ff4e86-435b-4ef9-b333-5e373e67a227","Type":"ContainerDied","Data":"29de090c7897226c6297aa0ab4dfa0e811c48dd41dcebcee20aa083372e2d512"} Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.035961 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ztrdx" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.035985 4940 scope.go:117] "RemoveContainer" containerID="b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.067760 4940 scope.go:117] "RemoveContainer" containerID="fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.086696 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztrdx"] Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.092517 4940 scope.go:117] "RemoveContainer" containerID="7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.096862 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ztrdx"] Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.136358 4940 scope.go:117] "RemoveContainer" containerID="b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8" Feb 23 10:27:56 crc kubenswrapper[4940]: E0223 10:27:56.136839 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8\": container with ID starting with b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8 not found: ID does not exist" containerID="b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.136893 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8"} err="failed to get container status \"b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8\": rpc error: code = NotFound desc = could not find container 
\"b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8\": container with ID starting with b3af313360f4113b83f33e4cc801093000b57948716a148c4c9d85ad9a95a3b8 not found: ID does not exist" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.136926 4940 scope.go:117] "RemoveContainer" containerID="fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1" Feb 23 10:27:56 crc kubenswrapper[4940]: E0223 10:27:56.137414 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1\": container with ID starting with fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1 not found: ID does not exist" containerID="fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.137640 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1"} err="failed to get container status \"fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1\": rpc error: code = NotFound desc = could not find container \"fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1\": container with ID starting with fec040debdd0b3dc32f1f6c91321e7da3c73cd0c403a588a263d176f86da96c1 not found: ID does not exist" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.137751 4940 scope.go:117] "RemoveContainer" containerID="7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e" Feb 23 10:27:56 crc kubenswrapper[4940]: E0223 10:27:56.138721 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e\": container with ID starting with 7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e not found: ID does not exist" 
containerID="7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e" Feb 23 10:27:56 crc kubenswrapper[4940]: I0223 10:27:56.138749 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e"} err="failed to get container status \"7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e\": rpc error: code = NotFound desc = could not find container \"7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e\": container with ID starting with 7c378442370d86c537e258d2983e78fa57cf22ba144b82b7b560f1c89144631e not found: ID does not exist" Feb 23 10:27:57 crc kubenswrapper[4940]: I0223 10:27:57.362153 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" path="/var/lib/kubelet/pods/18ff4e86-435b-4ef9-b333-5e373e67a227/volumes" Feb 23 10:28:04 crc kubenswrapper[4940]: I0223 10:28:04.347121 4940 scope.go:117] "RemoveContainer" containerID="034d4916d87eca1c845da02d4d801b4d26aff1b1f34659b667ee72945e9a694c" Feb 23 10:28:05 crc kubenswrapper[4940]: I0223 10:28:05.138913 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-26mgs" event={"ID":"f3f2cfd6-5ddf-436d-998f-440f1cc642b1","Type":"ContainerStarted","Data":"234872b1acb4561e201e94c6e5f97ad2e3ab2c660e9438dcf579b34f8a092f52"} Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.758328 4940 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pszgx"] Feb 23 10:28:31 crc kubenswrapper[4940]: E0223 10:28:31.759335 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="extract-content" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.759351 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" 
containerName="extract-content" Feb 23 10:28:31 crc kubenswrapper[4940]: E0223 10:28:31.759382 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="registry-server" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.759390 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="registry-server" Feb 23 10:28:31 crc kubenswrapper[4940]: E0223 10:28:31.759419 4940 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="extract-utilities" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.759427 4940 state_mem.go:107] "Deleted CPUSet assignment" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="extract-utilities" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.759731 4940 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ff4e86-435b-4ef9-b333-5e373e67a227" containerName="registry-server" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.761760 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.773314 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pszgx"] Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.821097 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxvn\" (UniqueName: \"kubernetes.io/projected/8d239b0d-7de5-42d9-a608-5af316c759ef-kube-api-access-qwxvn\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.821204 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-utilities\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.821282 4940 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-catalog-content\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.922630 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwxvn\" (UniqueName: \"kubernetes.io/projected/8d239b0d-7de5-42d9-a608-5af316c759ef-kube-api-access-qwxvn\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.922991 4940 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-utilities\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.923068 4940 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-catalog-content\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.923475 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-utilities\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.923562 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-catalog-content\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:31 crc kubenswrapper[4940]: I0223 10:28:31.949408 4940 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwxvn\" (UniqueName: \"kubernetes.io/projected/8d239b0d-7de5-42d9-a608-5af316c759ef-kube-api-access-qwxvn\") pod \"certified-operators-pszgx\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:32 crc kubenswrapper[4940]: I0223 10:28:32.086709 4940 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:32 crc kubenswrapper[4940]: I0223 10:28:32.823101 4940 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pszgx"] Feb 23 10:28:33 crc kubenswrapper[4940]: I0223 10:28:33.463258 4940 generic.go:334] "Generic (PLEG): container finished" podID="8d239b0d-7de5-42d9-a608-5af316c759ef" containerID="95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118" exitCode=0 Feb 23 10:28:33 crc kubenswrapper[4940]: I0223 10:28:33.463433 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerDied","Data":"95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118"} Feb 23 10:28:33 crc kubenswrapper[4940]: I0223 10:28:33.463667 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerStarted","Data":"d70a2ef5144b085ad00e5ab2c84d2db3c7a057ab7308b6cfffe38875a1f21af3"} Feb 23 10:28:34 crc kubenswrapper[4940]: I0223 10:28:34.473670 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerStarted","Data":"f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905"} Feb 23 10:28:35 crc kubenswrapper[4940]: I0223 10:28:35.495205 4940 generic.go:334] "Generic (PLEG): container finished" podID="8d239b0d-7de5-42d9-a608-5af316c759ef" containerID="f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905" exitCode=0 Feb 23 10:28:35 crc kubenswrapper[4940]: I0223 10:28:35.495534 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" 
event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerDied","Data":"f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905"} Feb 23 10:28:36 crc kubenswrapper[4940]: I0223 10:28:36.505996 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerStarted","Data":"64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52"} Feb 23 10:28:36 crc kubenswrapper[4940]: I0223 10:28:36.539810 4940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pszgx" podStartSLOduration=3.107678074 podStartE2EDuration="5.539784977s" podCreationTimestamp="2026-02-23 10:28:31 +0000 UTC" firstStartedPulling="2026-02-23 10:28:33.465655679 +0000 UTC m=+6044.848861836" lastFinishedPulling="2026-02-23 10:28:35.897762582 +0000 UTC m=+6047.280968739" observedRunningTime="2026-02-23 10:28:36.527096986 +0000 UTC m=+6047.910303163" watchObservedRunningTime="2026-02-23 10:28:36.539784977 +0000 UTC m=+6047.922991144" Feb 23 10:28:42 crc kubenswrapper[4940]: I0223 10:28:42.087631 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:42 crc kubenswrapper[4940]: I0223 10:28:42.088156 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:42 crc kubenswrapper[4940]: I0223 10:28:42.138834 4940 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:42 crc kubenswrapper[4940]: I0223 10:28:42.636277 4940 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:42 crc kubenswrapper[4940]: I0223 10:28:42.694388 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-pszgx"] Feb 23 10:28:44 crc kubenswrapper[4940]: I0223 10:28:44.579468 4940 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pszgx" podUID="8d239b0d-7de5-42d9-a608-5af316c759ef" containerName="registry-server" containerID="cri-o://64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52" gracePeriod=2 Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.157341 4940 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.350649 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwxvn\" (UniqueName: \"kubernetes.io/projected/8d239b0d-7de5-42d9-a608-5af316c759ef-kube-api-access-qwxvn\") pod \"8d239b0d-7de5-42d9-a608-5af316c759ef\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.350965 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-utilities\") pod \"8d239b0d-7de5-42d9-a608-5af316c759ef\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.351154 4940 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-catalog-content\") pod \"8d239b0d-7de5-42d9-a608-5af316c759ef\" (UID: \"8d239b0d-7de5-42d9-a608-5af316c759ef\") " Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.352168 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-utilities" (OuterVolumeSpecName: "utilities") pod "8d239b0d-7de5-42d9-a608-5af316c759ef" (UID: 
"8d239b0d-7de5-42d9-a608-5af316c759ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.357868 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d239b0d-7de5-42d9-a608-5af316c759ef-kube-api-access-qwxvn" (OuterVolumeSpecName: "kube-api-access-qwxvn") pod "8d239b0d-7de5-42d9-a608-5af316c759ef" (UID: "8d239b0d-7de5-42d9-a608-5af316c759ef"). InnerVolumeSpecName "kube-api-access-qwxvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.406483 4940 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d239b0d-7de5-42d9-a608-5af316c759ef" (UID: "8d239b0d-7de5-42d9-a608-5af316c759ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.454059 4940 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.454094 4940 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwxvn\" (UniqueName: \"kubernetes.io/projected/8d239b0d-7de5-42d9-a608-5af316c759ef-kube-api-access-qwxvn\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.454103 4940 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d239b0d-7de5-42d9-a608-5af316c759ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.591035 4940 generic.go:334] "Generic (PLEG): container finished" 
podID="8d239b0d-7de5-42d9-a608-5af316c759ef" containerID="64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52" exitCode=0 Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.591084 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerDied","Data":"64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52"} Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.591117 4940 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pszgx" event={"ID":"8d239b0d-7de5-42d9-a608-5af316c759ef","Type":"ContainerDied","Data":"d70a2ef5144b085ad00e5ab2c84d2db3c7a057ab7308b6cfffe38875a1f21af3"} Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.591140 4940 scope.go:117] "RemoveContainer" containerID="64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.591279 4940 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pszgx" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.636465 4940 scope.go:117] "RemoveContainer" containerID="f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.642714 4940 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pszgx"] Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.657981 4940 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pszgx"] Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.684864 4940 scope.go:117] "RemoveContainer" containerID="95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.713169 4940 scope.go:117] "RemoveContainer" containerID="64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52" Feb 23 10:28:45 crc kubenswrapper[4940]: E0223 10:28:45.713911 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52\": container with ID starting with 64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52 not found: ID does not exist" containerID="64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.713967 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52"} err="failed to get container status \"64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52\": rpc error: code = NotFound desc = could not find container \"64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52\": container with ID starting with 64d3d47fac9228dff1ffd7cfc061a80f61826852835d385b0e872dc1dda39e52 not 
found: ID does not exist" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.713991 4940 scope.go:117] "RemoveContainer" containerID="f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905" Feb 23 10:28:45 crc kubenswrapper[4940]: E0223 10:28:45.714343 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905\": container with ID starting with f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905 not found: ID does not exist" containerID="f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.714397 4940 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905"} err="failed to get container status \"f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905\": rpc error: code = NotFound desc = could not find container \"f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905\": container with ID starting with f9ecd89d1b622ef171645a79513ebdaf6b7d70607fe7d6d9f942cdfe59b1b905 not found: ID does not exist" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.714426 4940 scope.go:117] "RemoveContainer" containerID="95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118" Feb 23 10:28:45 crc kubenswrapper[4940]: E0223 10:28:45.714710 4940 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118\": container with ID starting with 95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118 not found: ID does not exist" containerID="95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118" Feb 23 10:28:45 crc kubenswrapper[4940]: I0223 10:28:45.714736 4940 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118"} err="failed to get container status \"95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118\": rpc error: code = NotFound desc = could not find container \"95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118\": container with ID starting with 95811ce6b3a97997a5550568bdb2e64f8056a0a6145911fda896939e94392118 not found: ID does not exist" Feb 23 10:28:47 crc kubenswrapper[4940]: I0223 10:28:47.365409 4940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d239b0d-7de5-42d9-a608-5af316c759ef" path="/var/lib/kubelet/pods/8d239b0d-7de5-42d9-a608-5af316c759ef/volumes"